Similar Articles
1.
In this paper the pseudo-random number generators implemented in the widely used Commodore and Apple microcomputers are discussed. The results of this investigation show that these generators are not useful for scientific work. In particular, short periods appear in the sequences of generated pseudo-random numbers.
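The failure mode described above can be illustrated with a minimal Python sketch (the generator parameters here are deliberately poor illustrative choices, not the actual Commodore or Apple constants): a linear congruential generator with a badly chosen multiplier cycles long before exhausting its state space.

```python
def lcg(seed, a, c, m):
    """Linear congruential generator: x -> (a*x + c) mod m."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

def period(seed, a, c, m):
    """Brute-force cycle length: iterate until a state repeats."""
    seen = {}
    for i, x in enumerate(lcg(seed, a, c, m)):
        if x in seen:
            return i - seen[x]
        seen[x] = i

# With multiplier 5, increment 0 and modulus 32, the cycle closes after
# only 8 of the 32 possible states -- a "short period" of the kind the
# paper reports.
print(period(1, a=5, c=0, m=32))   # prints 8
```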

2.
Statistical tests of significance are carried out on the feedback shift register pseudo-random number generator employed on the BBC microcomputer. The tests are based on the practicalities of using a microcomputer in simulations for statistical education. The results indicate that the generator is not universally acceptable in this role.

3.
Taylor and Thompson [15] introduced a clever algorithm for simulating multivariate continuous data sets that resemble the original data. Their approach is predicated upon determining a few nearest neighbors of a given row of data through a statistical distance measure, and subsequently combining the observations by stochastic multipliers that are drawn from a uniform distribution to generate simulated data that essentially maintain the original data trends. The newly drawn values are assumed to come from the same underlying hypothetical process that governs the mechanism of how the data are formed. This technique is appealing in that no density estimation is required. We believe that this data-based simulation method has substantial potential in multivariate data generation due to the local nature of the generation scheme, which does not have strict specification requirements as in most other algorithms. In this work, we provide two R routines: one has a built-in simulator for finding the optimal number of nearest neighbors for any given data set, and the other generates pseudo-random data using this optimal number.

4.
An easily implemented and computationally efficient procedure is presented for the generation of autocorrelated pseudo-random numbers with specific probability distributions. A plot illustrates the relationship among the autocorrelations of the uniform, Rayleigh, and exponential distributions corresponding to a given autocorrelation in the normal generating distribution.
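A common way to realize such autocorrelated variates, sketched here under the assumption of a normal AR(1) generating process mapped through the probability integral transform (the paper's exact procedure may differ), is:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def ar1_normal(n, rho):
    """Standard-normal AR(1) series with lag-1 autocorrelation rho."""
    z = np.empty(n)
    z[0] = rng.standard_normal()
    for t in range(1, n):
        z[t] = rho * z[t - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()
    return z

def correlated_exponential(n, rho, scale=1.0):
    """Exponential marginals driven by the normal AR(1) process; note that
    the output autocorrelation differs from rho -- the mapping between the
    two is exactly what the paper's plot describes."""
    u = norm.cdf(ar1_normal(n, rho))    # autocorrelated Uniform(0, 1)
    return -scale * np.log1p(-u)        # inverse exponential CDF

x = correlated_exponential(10_000, rho=0.8)
```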

5.
This article presents a constrained maximization of the Shapiro–Wilk W statistic for estimating parameters of the Johnson SB distribution. The gradient of the W statistic with respect to the minimum and range parameters is used within a quasi-Newton framework to achieve a fit for all four parameters. The method is evaluated with measures of bias and precision using pseudo-random samples from three different SB populations. The population means were estimated with an average relative bias of less than 0.1% and the population standard deviations with less than 4.0% relative bias. The methodology appears promising as a tool for fitting this sometimes difficult distribution.

6.
This paper provides specific directions for the preparation of a discrete pseudo-random number generator computer program using basic machine instructions. The scheme is a table look-up first suggested by Marsaglia. It is applicable to any discrete probability distribution. The general procedure is described for probabilities expressed as fractions in a number system of arbitrary base β. A brief example is given using the decimal system. Flow diagrams accompany the directions, which will enable an experienced programmer to write the program for any computer system with only modest storage requirements. Results of chi-square tests performed on samples from specific binomial, Poisson, negative binomial, and hypergeometric distributions generated using this procedure are given.
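A one-level version of the table look-up can be sketched as follows (the probabilities and distribution are illustrative, and the staged digit-by-digit tables that keep storage modest in Marsaglia's actual scheme are omitted for brevity):

```python
import random

def build_table(probs, digits=3):
    """One-level Marsaglia look-up table: each probability, written to
    `digits` decimal places, contributes that many entries to the table."""
    table = []
    for value, p in enumerate(probs):
        table.extend([value] * round(p * 10**digits))
    assert len(table) == 10**digits, "probabilities must sum to 1"
    return table

def sample(table, n, rng=random):
    """Each draw is a single uniform index into the table."""
    return [table[rng.randrange(len(table))] for _ in range(n)]

table = build_table([0.25, 0.50, 0.25])   # e.g. Binomial(2, 1/2)
draws = sample(table, 10_000)
```

The scheme Marsaglia proposed splits the table by digit position, trading one comparison per draw for far smaller storage; the single flat table above conveys the idea.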

7.
Assume that a k-element vector time series follows a vector autoregressive (VAR) model. Obtaining simultaneous forecasts of the k elements of the vector time series is an important problem. Based on the Bonferroni inequality, Lutkepohl (1991) derived procedures that construct conservative joint forecast regions for the VAR model. In this paper, we propose an exact method which provides shorter prediction intervals than the Bonferroni method. Three illustrative examples are given for comparison of the various VAR forecasting procedures.

8.
9.
The idea of searching for orthogonal projections, from a multidimensional space into a linear subspace, as an aid to detecting non-linear structure has been named exploratory projection pursuit. Most approaches are tied to the idea of searching for interesting projections. Typically, an interesting projection is one where the distribution of the projected data differs from the normal distribution. In this paper we define two projection indices which are aimed specifically at finding projections that best show grouped structure in the plane, if this exists in the multidimensional space. These involve a numerical optimization problem which is tackled in two stages, the projection and the pursuit; the first is based on a procedure to generate pseudo-random rotation matrices in the sense of the grand tour by D. Asimov (1985), and the second is a local numerical optimization procedure. One artificial and one real example illustrate the performance of the suggested indices.
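The rotation-generation step (not the grand-tour interpolation itself) can be sketched with the standard QR construction of Haar-uniform rotation matrices:

```python
import numpy as np

rng = np.random.default_rng(3)

def random_rotation(d):
    """Haar-uniform pseudo-random rotation matrix: QR-decompose a Gaussian
    matrix, fix the signs of R's diagonal to make the factorization unique,
    then flip one column if needed so the determinant is +1 (a rotation,
    not a reflection)."""
    q, r = np.linalg.qr(rng.standard_normal((d, d)))
    q *= np.sign(np.diag(r))
    if np.linalg.det(q) < 0:
        q[:, 0] = -q[:, 0]
    return q

R = random_rotation(5)
```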

10.
In this paper, we consider the classification of high-dimensional vectors based on a small number of training samples from each class. The proposed method follows the Bayesian paradigm, and it is based on a small vector which can be viewed as the regression of the new observation on the space spanned by the training samples. The classification method provides posterior probabilities that the new vector belongs to each of the classes, hence it adapts naturally to any number of classes. Furthermore, we show a direct similarity between the proposed method and the multicategory linear support vector machine introduced in Lee et al. [2004. Multicategory support vector machines: theory and applications to the classification of microarray data and satellite radiance data. Journal of the American Statistical Association 99 (465), 67–81]. We compare the performance of the technique proposed in this paper with the SVM classifier using real-life military and microarray datasets. The study shows that the misclassification errors of both methods are very similar, and that the posterior probabilities assigned to each class are fairly accurate.

11.
Although multivariate statistical process control has received well-deserved attention in the literature, little work has been done on multi-attribute processes. While the NORTA algorithm can generate an arbitrary multi-dimensional random vector by transforming a multi-dimensional standard normal vector, in this article, using the inverse transformation method, we initially transform a multi-attribute random vector so that the marginal probability distributions of the transformed random variables are approximately normal. Then, we estimate the covariance matrix of the transformed vector via simulation. Finally, we apply the well-known T² control chart to the transformed vector. We use simulation experiments to illustrate the proposed method and to compare its performance with that of the deleted-Y method. The results show that the proposed method works better than the deleted-Y method in terms of the out-of-control average run length criterion.
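The initial normalizing step can be sketched for a single Poisson-distributed attribute using a randomized inverse (probability integral) transformation; the parameter and sample size here are illustrative:

```python
import numpy as np
from scipy.stats import norm, poisson

rng = np.random.default_rng(1)

def to_normal(x, lam):
    """Inverse-transformation step for one Poisson attribute: draw a
    randomized CDF value u ~ Uniform(F(x-1), F(x)) -- exactly Uniform(0, 1)
    under the assumed model -- and push it through the normal quantile."""
    lo = poisson.cdf(x - 1, lam)
    hi = poisson.cdf(x, lam)
    u = rng.uniform(lo, hi)
    return norm.ppf(u)

x = rng.poisson(4.0, size=5_000)
z = to_normal(x, 4.0)   # approximately standard normal
```

A T² chart would then be applied to the vector of such transformed attributes, with the covariance matrix estimated by simulation as the abstract describes.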

12.
The present work addresses the question of how sampling algorithms for commonly applied copula models can be adapted to account for quasi-random numbers. Besides sampling methods such as the conditional distribution method (based on a one-to-one transformation), it is also shown that typically faster sampling methods (based on stochastic representations) can be used to improve upon classical Monte Carlo methods when pseudo-random number generators are replaced by quasi-random number generators. This opens the door to quasi-random numbers for models well beyond independent margins or the multivariate normal distribution. Detailed examples (in the context of finance and insurance), illustrations and simulations are given, and software is provided in the R packages copula and qrng.
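The idea of replacing a pseudo-random generator by a quasi-random one in the one-to-one transformation route can be sketched for a bivariate Gaussian copula (SciPy's Sobol generator stands in for the R package qrng; the correlation matrix is illustrative):

```python
import numpy as np
from scipy.stats import norm, qmc

def gaussian_copula_qmc(n, corr, seed=0):
    """Gaussian-copula sample driven by a scrambled Sobol sequence instead
    of a pseudo-random generator: quasi-random uniforms -> normal quantiles
    -> correlate via Cholesky -> back to copula-scale uniforms."""
    d = corr.shape[0]
    u = qmc.Sobol(d, scramble=True, seed=seed).random(n)
    u = np.clip(u, 1e-12, 1 - 1e-12)           # guard the quantile function
    z = norm.ppf(u) @ np.linalg.cholesky(corr).T
    return norm.cdf(z)

corr = np.array([[1.0, 0.7],
                 [0.7, 1.0]])
u = gaussian_copula_qmc(1024, corr)            # n a power of 2 suits Sobol
```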

13.
The maximum-likelihood estimation technique is known to provide consistent and highly efficient regression estimates, but it is often tedious to implement, particularly when modelling correlated count responses. To overcome this limitation, researchers have developed semi- or quasi-likelihood functions that depend only on the correct specification of the mean and variance of the responses rather than on the distribution function. Moreover, quasi-likelihood estimation provides estimates that are consistent and as efficient as those of the maximum-likelihood approach. Basically, the quasi-likelihood estimating function is a non-linear equation consisting of the gradient, Hessian and basic score matrices. Hence, to obtain estimates of the regression parameters, the quasi-likelihood equation is solved iteratively using the Newton–Raphson technique. However, the inverse of the Jacobian matrix involved in the Newton–Raphson method may not be easy to compute, since the matrix can be very close to singularity. In this paper, we consider the use of vector divisions in solving quasi-likelihood equations. The vector divisions are implemented to form secant-method formulas. To assess the performance of vector divisions with the secant method, we generate cross-sectional Poisson counts using different sets of mean parameters. We compute the estimates of the regression parameters using the Newton–Raphson technique and vector divisions, and compare the number of non-convergent simulations under both algorithms.

14.
A vector of k positive coordinates lies in the k-dimensional simplex when the sum of all coordinates in the vector is constrained to equal 1. Sampling distributions efficiently on the simplex can be difficult because of this constraint. This paper introduces a transformed logit-scale proposal for Markov chain Monte Carlo that naturally adjusts step size based on the position in the simplex. This enables efficient sampling on the simplex even when the simplex is high dimensional and/or includes coordinates of differing orders of magnitude. Implementation of this method is shown with the SALTSampler R package, and comparisons are made with simpler sampling schemes to illustrate the improvement in performance this method provides. A simulation of a typical calibration problem also demonstrates the utility of this method.
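A simplified sketch of a logit-scale random-walk proposal on the simplex (an additive log-ratio transform with its Jacobian correction, not the SALTSampler implementation itself) targeting an illustrative Dirichlet distribution:

```python
import numpy as np

rng = np.random.default_rng(2)

def alr_inv(y):
    """Inverse additive log-ratio map: R^(k-1) -> interior of the simplex."""
    e = np.exp(np.append(y, 0.0))
    return e / e.sum()

def log_target(y, alpha):
    """Dirichlet(alpha) log-density on the logit scale, including the
    log-Jacobian sum(log x) of the transformation."""
    x = alr_inv(y)
    return np.sum((alpha - 1.0) * np.log(x)) + np.sum(np.log(x))

def simplex_mcmc(alpha, n_iter=20_000, step=0.3):
    """Random-walk Metropolis in logit coordinates: a fixed step on this
    scale shrinks automatically near the simplex boundary."""
    k = len(alpha)
    y = np.zeros(k - 1)
    lp = log_target(y, alpha)
    out = np.empty((n_iter, k))
    for t in range(n_iter):
        y_prop = y + step * rng.standard_normal(k - 1)
        lp_prop = log_target(y_prop, alpha)
        if np.log(rng.uniform()) < lp_prop - lp:
            y, lp = y_prop, lp_prop
        out[t] = alr_inv(y)
    return out

draws = simplex_mcmc(np.array([2.0, 5.0, 3.0]))
```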

15.
In this paper, we investigate empirical likelihood (EL) inference for density-weighted average derivatives in nonparametric multiple regression models. A simply adjusted empirical log-likelihood ratio for the vector of density-weighted average derivatives is defined, and its limiting distribution is shown to be a standard chi-square distribution. To increase the accuracy and coverage probability of confidence regions, an EL inference procedure for the rescaled parameter vector is proposed using a linear instrumental-variables regression. The new method shares the properties of the regular EL method with i.i.d. samples; for example, estimation of limiting variances and covariances is not needed. A Monte Carlo simulation study is presented to compare the new method with the normal approximation method and an existing EL method.

16.
A Gauss–Markov model is said to be singular if the covariance matrix of the observable random vector in the model is singular. In such a case, there exist some natural restrictions associated with the observable random vector and the unknown parameter vector in the model. In this paper, we derive through the matrix rank method a necessary and sufficient condition for a vector of parametric functions to be estimable, and necessary and sufficient conditions for a linear estimator to be unbiased in the singular Gauss–Markov model. In addition, we give some necessary and sufficient conditions for the ordinary least-square estimator (OLSE) and the best linear unbiased estimator (BLUE) under the model to satisfy the natural restrictions.

17.
This paper develops a method of estimating micro-level poverty in cases where data are scarce. The method is applied to estimate district-level poverty using the household level Indian national sample survey data for two states, viz., West Bengal and Madhya Pradesh. The method involves estimation of state-level poverty indices from the data formed by pooling data of all the districts (each time excluding one district) and multiplying this poverty vector with a known weight matrix to obtain the unknown district-level poverty vector. The proposed method is expected to yield reliable estimates at the district level, because the district-level estimate is now based on a much larger sample size obtained by pooling data of several districts. This method can be an alternative to the “small area estimation technique” for estimating poverty at sub-state levels in developing countries.

18.
Threshold Cointegration Arbitrage: Theory and Empirical Study
Prices of identical or similar commodities traded in different markets maintain a long-run equilibrium relationship; when prices deviate from equilibrium, arbitrage trading pulls the deviation quickly back. Outside a certain threshold the two price series are cointegrated, while within the threshold they are not; this relationship is called threshold cointegration. Building on Balke and Fomby (1997) [1] and Hansen (1996) [6], this paper proposes a sup-Wald test based on the threshold vector error correction model (T-VECM), uses the bootstrap to simulate the asymptotic distribution of the statistic, and verifies the threshold cointegration relationship between the UK FTSE index futures (uk100) and the German Frankfurt index futures (ger30). The threshold parameter and the cointegrating vector are then estimated jointly by the maximum likelihood estimation (MLE) method of Hansen and Seo (2002) [11], and a cross-market risk-free arbitrage strategy under this threshold cointegration relationship is presented.

19.
The problem of generating pseudo-random unit vectors from the Fisher–Bingham distribution is considered. Attention is focussed on a subfamily known as FB6, which includes many of the spherical distributions of practical interest, including those of Dimroth–Watson, FB4, FB5 and Bingham type. The rejection procedures suggested here for these distributions should prove adequate for most practical purposes. Possibilities for simulating from the general Fisher–Bingham distribution are also mentioned briefly.

20.
The first step in statistical analysis is parameter estimation. In multivariate analysis, one of the parameters of interest is the mean vector. In multivariate statistical analysis, it is usually assumed that the data come from a multivariate normal distribution; in this situation, the maximum likelihood estimator (MLE), that is, the sample mean vector, is the best estimator. However, when outliers exist in the data, the sample mean vector gives poor estimates, so estimators that are robust to the existence of outliers should be used. The most popular robust multivariate estimator of the mean vector is the S-estimator, which has desirable properties. However, computing this estimator requires a robust estimate of the mean vector as a starting point; usually the minimum volume ellipsoid (MVE) is used. For high-dimensional data, computing the MVE takes too much time, in some cases so much that existing computers cannot perform the computation, and the MVE method is also not precise for high-dimensional data sets. In this paper, a robust starting point for the S-estimator based on robust clustering is proposed, which can be used for estimating the mean vector of high-dimensional data. The performance of the proposed estimator in the presence of outliers is studied, and the results indicate that it performs precisely and much better than some existing robust estimators for high-dimensional data.
