Similar Articles
1.
This paper proposes a new approach to the genetic algorithm (GA) based on two explicit rules from Mendel's experiments and Mendel's population genetics: the segregation and the independent assortment of alleles. This new approach has been simulated for the optimization of certain test functions. The conceptual foundation of the GA is improved by placing it within a Mendelian framework. The new approach differs from the conventional one in terms of its crossover, recombination, and mutation operators. The results obtained here are in agreement with those of the conventional GA, and are even better in some cases. These results suggest that the new approach is overall more sensitive and accurate than the conventional one. Possible ways of improving the approach by including more genetic formulae in the code are also discussed.
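As an illustration of the two Mendelian rules named above, the sketch below implements a recombination operator on diploid genotypes: at each locus one of the two parental alleles is chosen at random (segregation), independently across loci (independent assortment). This is a minimal, hypothetical operator written for illustration, not the authors' exact implementation.

```python
import random

def gamete(parent):
    """Form a gamete: at each locus pick one of the parent's two alleles
    uniformly at random (segregation), independently across loci
    (independent assortment)."""
    return [random.choice(locus) for locus in parent]

def mendelian_cross(parent_a, parent_b):
    """Offspring genotype: one gamete from each diploid parent."""
    return [list(pair) for pair in zip(gamete(parent_a), gamete(parent_b))]

# Toy example: three loci, two alleles (coded 0/1) per locus in each parent.
p1 = [(0, 1), (1, 1), (0, 0)]
p2 = [(1, 0), (0, 1), (1, 0)]
print(mendelian_cross(p1, p2))
```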

2.
The history of science is full of myths. Darwin has his fair share; but Gregor Mendel, his fellow scientist and contemporary, has suffered even more. R. Allan Reese disentangles what we like to believe about Mendel from what we should believe—and finds a modern species whose origin was not by conventional evolution.

3.
The results of this survey underscore the controversy surrounding the elimination of OCLC's Serials Control Subsystem. The survey evoked responses more numerous and fervent than I anticipated when the project began. As might be expected, those libraries that used the system more extensively and checked in the largest number of formats on SCS were most likely to describe the impact of SCS's elimination as serious. Most of the respondents expected that their immediate choice for a new serials control system would be permanent. As one respondent commented, “…librarians cannot afford to be switching serials control systems every few years.” Among libraries using the Serials Control Subsystem, approximately one-third chose to switch to OCLC's SC350, but no single alternative serials system emerged as the second choice.

4.
Taguchi's quality engineering concepts are of great importance in designing and improving product quality and processes. However, most of the controversy and mystique have centred on Taguchi's tactics. This research extends on-going work by investigating, through simulation, the probability of identifying insignificant factors as significant (the so-called alpha error) with the L16 orthogonal array for the larger-the-better type of response variable. The response variables in the L16 array are generated from a normal distribution with the same mean and standard deviation, so the null hypothesis that all factors in the L16 array are insignificant is true. Simulation results, however, reveal that some insignificant factors are wrongly identified as significant with a very high probability, which may yield a risky parameter design. Therefore, efficient and more valid statistical tactics should be developed to put Taguchi's important quality engineering concepts into practice.
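A rough Monte Carlo sketch of this kind of alpha-error experiment is given below. The factor-to-column assignment, the larger-the-better S/N transformation, and the pooled-error F test are assumptions made for illustration; they are not necessarily the exact tactics evaluated in the paper.

```python
import numpy as np
from scipy.stats import f

rng = np.random.default_rng(0)

# Build the 15 two-level columns of the L16 orthogonal array: take the 4 bits of
# the run index as basic columns and form every XOR combination of them
# (one column per non-empty subset of the 4 bits).
basic = (np.arange(16)[:, None] >> np.arange(4)) & 1           # 16 runs x 4 bits
cols = []
for s in range(1, 16):
    members = [i for i in range(4) if (s >> i) & 1]
    cols.append(np.bitwise_xor.reduce(basic[:, members], axis=1))
L16_array = np.stack(cols, axis=1)                             # 16 x 15, entries 0/1

n_factors = 7           # columns assigned to factors; the other 8 estimate error
alpha = 0.05
n_sim = 2000
false_alarm = 0

for _ in range(n_sim):
    y = rng.normal(10.0, 1.0, size=16)     # null model: no factor has any effect
    sn = 20.0 * np.log10(y)                # larger-the-better S/N, one observation per run
    # Sum of squares contributed by each column (balanced two-level contrast)
    ss = np.array([(sn[L16_array[:, j] == 0].sum() - sn[L16_array[:, j] == 1].sum()) ** 2 / 16
                   for j in range(15)])
    ms_error = ss[n_factors:].mean()       # pool the unassigned columns as error
    f_stats = ss[:n_factors] / ms_error
    critical = f.ppf(1 - alpha, 1, 15 - n_factors)
    if np.any(f_stats > critical):         # at least one factor falsely declared significant
        false_alarm += 1

print("estimated experiment-wise alpha error:", false_alarm / n_sim)
```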

5.
The purpose of this note is to criticize Nguyen (1985) for his account of the literature on the generalization of Fisher's exact test, and to point out parallels between the algorithm proposed by Nguyen and existing algorithms. Subsequently we briefly raise some questions about the methodology proposed by Nguyen.

Nguyen (1985) suggests that all literature on exact testing prior to Nguyen & Sampson (1985) is based on the “more probable” relation or Exact Probability Test (EPT) as a test statistic. This is not correct. Yates (1934 - Pearson's X2), Lewontin & Felsenstein (1965 - X2), Agresti & Wackerly (1977 - X2, Kendall's tau, Kruskal & Goodman's gamma), Klotz (1966 - Wilcoxon), Klotz & Teng (1977 - Kruskal & Wallis' H), Larntz (1978 - X2, log-likelihood-ratio statistic G2, Freeman & Tukey statistic), and several others have investigated exact tests with statistics other than the EPT. In fact, Bennett & Nakamura (1963) are incorrectly cited, as they investigated both X2 and G2 rather than the EPT. Also, Freeman & Halton (1951) are incorrectly cited, for they generalized Fisher's exact test to p×q tables and not 2×q tables as stated. And they are even predated by Yates (1934), who extended the test to 2×3 tables.

6.
We introduce a modified version f̂n of the piecewise linear histogram of Beirlant et al. (1998) which is a true probability density, i.e., f̂n ≥ 0 and ∫ f̂n = 1. We prove that f̂n estimates the underlying density f strongly consistently in the L1 norm, derive large deviation inequalities for the L1 error ‖f̂n − f‖, and prove that E‖f̂n − f‖ tends to zero at the rate n^(-1/3). We also show that the derivative f̂'n consistently estimates, in the expected L1 error, the derivative f' of a sufficiently smooth density, and we evaluate the rate of convergence n^(-1/5) for E‖f̂'n − f'‖. The estimator f̂n thus enables one to approximate f in the Besov space with a guaranteed rate of convergence. Optimization of the smoothing parameter is also studied. The theoretical or experimentally approximated values of the expected errors E‖f̂n − f‖ and E‖f̂'n − f'‖ are compared with the errors achieved by the histogram of Beirlant et al. and other nonparametric methods.
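The key requirement stated above, an estimate that is itself a density (non-negative and integrating to one), can be illustrated with a simple normalized frequency-polygon construction. This is only a hypothetical sketch; it is not Beirlant et al.'s modified estimator and carries none of its convergence guarantees.

```python
import numpy as np

def piecewise_linear_density(sample, bins=20):
    """Illustrative only: linearly interpolate histogram heights at the bin
    centres, pad with zero height at the range ends, and renormalise so the
    result is a genuine density (non-negative and integrating to one)."""
    heights, edges = np.histogram(sample, bins=bins, density=True)
    centres = 0.5 * (edges[:-1] + edges[1:])
    x = np.concatenate(([edges[0]], centres, [edges[-1]]))
    y = np.concatenate(([0.0], heights, [0.0]))            # already non-negative
    area = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))     # trapezoid rule
    return x, y / area

x, fhat = piecewise_linear_density(np.random.default_rng(1).normal(size=500))
print(np.sum(0.5 * (fhat[1:] + fhat[:-1]) * np.diff(x)))   # ~1.0 by construction
```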

7.
In an informal way, some dilemmas in connection with hypothesis testing in contingency tables are discussed. The body of the article concerns the numerical evaluation of Cochran's Rule about the minimum expected value in r × c contingency tables with fixed margins when testing independence with Pearson's X2 statistic using the χ2 distribution.
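The quantities involved can be illustrated with a small, hypothetical table: Pearson's X2 with its χ2 p-value, together with the expected counts that Cochran's rule restricts (the usual reading of the rule, assumed here, is that no expected count should fall below 1 and at most 20% should fall below 5).

```python
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[12,  5,  7],
                  [ 9, 11,  4]])        # hypothetical 2 x 3 contingency table

stat, p_value, dof, expected = chi2_contingency(table, correction=False)
print("Pearson X^2 =", round(stat, 3), " df =", dof, " p =", round(p_value, 4))

# Quantities restricted by Cochran's rule of thumb: no expected count below 1
# and at most 20% of the expected counts below 5.
print("minimum expected count:", round(expected.min(), 2))
print("share of expected counts below 5:", np.mean(expected < 5))
```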

8.
Berkson (1980) conjectured that minimum χ2 is a procedure superior to maximum likelihood, especially with regard to mean squared error. To explore his conjecture, we analyze his (1955) bioassay problem related to logistic regression. We consider not only the criterion of mean squared error for the comparison of these estimators, but also alternative criteria such as concentration functions and Pitman's measure of closeness. The choice of these latter criteria is motivated by Rao's (1981) considerations of the shortcomings of mean squared error. We also include several Rao–Blackwellized versions of the minimum logit χ2 estimator for the purpose of these comparisons.
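For reference, the minimum logit χ2 estimator for a logistic dose–response model can be computed as a weighted least-squares fit of the empirical logits, with weights n·p̂·(1 − p̂). The sketch below uses hypothetical bioassay data, not Berkson's 1955 values, and omits the Rao–Blackwellized variants.

```python
import numpy as np

# Hypothetical bioassay data (not Berkson's 1955 values): dose, subjects, responders
dose = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
n    = np.array([40, 40, 40, 40, 40])
r    = np.array([6, 10, 18, 27, 34])

p_hat = r / n
x = np.log(dose)
logit = np.log(p_hat / (1 - p_hat))     # empirical logits (all proportions strictly in (0, 1))
w = n * p_hat * (1 - p_hat)             # minimum logit chi-square weights

# Weighted least squares fit of logit(p) = a + b * log(dose)
X = np.column_stack([np.ones_like(x), x])
W = np.diag(w)
a, b = np.linalg.solve(X.T @ W @ X, X.T @ W @ logit)
print("minimum logit chi-square estimates: intercept =", round(a, 3), " slope =", round(b, 3))
```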

9.
In this paper we consider the issue of constructing retrospective T2 control chart limits so as to control the overall probability of a false alarm at a specified value. We describe an exact method for constructing the control limits for retrospective examination. We then consider Bonferroni adjustments to Alt's control limit and to the standard χ2 control limit as alternatives to the exact limit, since it is computationally cumbersome to find the exact limit. We present the results of some simulation experiments carried out to compare the performance of these control limits. The results indicate that the Bonferroni-adjusted Alt's control limit performs better than the Bonferroni-adjusted χ2 control limit. Furthermore, it appears that the Bonferroni-adjusted Alt's control limit is more than adequate for controlling the overall false alarm probability at a specified value.
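Of the alternatives mentioned, the Bonferroni-adjusted χ2 limit is the simplest to compute: each of the m retrospective T2 values is compared with the (1 − α/m) quantile of the χ2 distribution with p degrees of freedom. The sketch below uses hypothetical numbers; Alt's limit and the exact limit are not reproduced here.

```python
import numpy as np
from scipy.stats import chi2

def bonferroni_chi2_limit(alpha, m, p):
    """Bonferroni-adjusted chi-square limit: each of the m retrospective T^2 values
    is compared with the (1 - alpha/m) quantile of chi2 with p degrees of freedom,
    so the overall false-alarm probability is at most alpha."""
    return chi2.ppf(1 - alpha / m, df=p)

alpha, m, p = 0.05, 30, 2                          # hypothetical chart: 30 points, 2 quality variables
limit = bonferroni_chi2_limit(alpha, m, p)

t2_values = np.array([1.2, 3.4, 0.8, 15.9, 2.2])   # hypothetical T^2 statistics for a few points
print("control limit:", round(limit, 2))
print("signalling points:", np.where(t2_values > limit)[0])
```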

10.
Bartlett's test (1937) for equality of variances is based on a χ2 distribution approximation. This approximation deteriorates either when the sample sizes are small (particularly < 4) or when the number of populations is large. In a simulation investigation, we find a similar varying trend in the mean differences between the empirical distributions of Bartlett's statistic and its χ2 approximation. Using these mean differences to represent the distributional departure, a simple adjustment to Bartlett's statistic is proposed on the basis of an equal-mean principle. The performance before and after adjustment is investigated extensively under equal and unequal sample sizes, with the number of populations varying from 3 to 100. Compared with the traditional Bartlett's statistic, the adjusted statistic is distributed more closely to the χ2 distribution for homogeneous samples from normal populations. The type I error is well controlled and the power is slightly higher after adjustment. In conclusion, the adjustment gives good control of the type I error and higher power, and is thus recommended for small samples and a large number of populations when the underlying distribution is normal.
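The unadjusted test and its χ2 reference distribution, whose small-sample behaviour motivates the adjustment, can be computed directly; the proposed mean-based adjustment itself is not reproduced in this sketch.

```python
import numpy as np
from scipy.stats import bartlett, chi2

rng = np.random.default_rng(2)
k = 20                                                         # many populations ...
samples = [rng.normal(0.0, 1.0, size=4) for _ in range(k)]     # ... with small samples (n = 4 each)

stat, p_value = bartlett(*samples)      # SciPy returns the statistic and its chi2(k-1) p-value
print("Bartlett statistic:", round(stat, 3))
print("chi2(k-1) p-value:", round(p_value, 4))
print("0.95 quantile of chi2(k-1):", round(chi2.ppf(0.95, k - 1), 3))
```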

11.
In this article, we test a new method of combining economic forecasts, the odds-matrix approach, using Clemen and Winkler's (1986) data on gross-national-product forecasts. For these data, the results show that the odds-matrix method is more accurate than each of the other methods tested and can be expected to be especially useful when data are nonstationary or sparse.

12.
Fisher's method of combining independent tests is used to construct tests of means of multivariate normal populations when the covariance matrix has intraclass correlation structure. Monte Carlo studies are reported which show that the tests are more powerful than Hotelling's T2-test in both one- and two-sample situations.
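Fisher's combining rule itself is simple: with k independent p-values p1, …, pk, the statistic −2 Σ ln pi follows a χ2 distribution with 2k degrees of freedom under the overall null hypothesis. A minimal sketch is below; how the component tests are constructed for the intraclass-correlation setting is, of course, the subject of the paper.

```python
import numpy as np
from scipy.stats import chi2

def fisher_combine(p_values):
    """Fisher's method: under the overall null, -2 * sum(log p_i) ~ chi2 with 2k df."""
    p_values = np.asarray(p_values, dtype=float)
    stat = -2.0 * np.log(p_values).sum()
    return stat, chi2.sf(stat, df=2 * len(p_values))

# Hypothetical p-values from k = 3 independent component tests
stat, p_combined = fisher_combine([0.08, 0.12, 0.03])
print("combined statistic:", round(stat, 3), " combined p-value:", round(p_combined, 4))
```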

13.
From Picas to Pixels takes a peek into the life of librarian.net's founder and publisher, Jessamyn West. We talk about everything from library school to Colin Powell, from shell commands to the Internet community's recent infatuation with tagging (or what we catalogers like to call subject analysis). I tried to lead Jessamyn to admit that we need more catalogers in the mix on projects that are using tagging extensively, like Flickr.com and the Missouri Botanical Garden's new “tagswarming” project, but she did not take the bait.

14.
Existing equivalence tests for multinomial data are valid asymptotically, but the level is not properly controlled for small and moderate sample sizes. We resolve this difficulty by developing an exact multinomial test for equivalence and an associated confidence interval procedure. We also derive a conservative version of the test that is easy to implement even for very large sample sizes. Both tests use a notion of equivalence that is based on the cumulative distribution function, with two probability vectors being considered equivalent if their partial sums never differ by more than some specified constant. We illustrate the methods by applying them to Weldon's dice data, to data on the digits of π, and to data collected by Mendel. The Canadian Journal of Statistics 37: 47–59; © 2009 Statistical Society of Canada
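The underlying notion of equivalence is easy to compute: two probability vectors are compared through the largest absolute difference between their partial sums. A minimal sketch with hypothetical counts and a uniform reference vector is below; the exact test built on this distance is developed in the paper and not reproduced here.

```python
import numpy as np

def max_partial_sum_distance(counts, p0):
    """Largest absolute difference between the partial sums (CDFs) of the
    observed proportions and the reference probability vector p0."""
    p_hat = np.asarray(counts, dtype=float) / np.sum(counts)
    return np.max(np.abs(np.cumsum(p_hat) - np.cumsum(p0)))

counts = [1022, 974, 1003, 989, 1007, 1005]     # hypothetical six-cell counts
p0 = np.full(6, 1 / 6)                          # uniform reference vector
print("max CDF distance:", round(max_partial_sum_distance(counts, p0), 4))
```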

15.
In this study we consider the problem of improving on the sample mean, in the second-order minimax estimation sense, for a mean belonging to the unrestricted mean parameter space R+. We solve this problem for the class of natural exponential families (NEF's) whose variance functions (VF's) are regular at zero and at infinity. Such a class of VF's (or NEF's) is huge and contains, among others: polynomial VF's (e.g., quadratic VF's in the Morris class, cubic VF's in the Letac & Mora class, and VF's in the Hinde–Demétrio class); power VF's belonging to the Tweedie class; VF's belonging to the Babel class; and many others. Moreover, we show that if the canonical parameter space of the corresponding NEF is R (which is obviously the case if the support of the NEF is bounded), then the sample mean as an estimator of the mean cannot be further improved. This work presents an original constructive methodology and provides constructive tools enabling one to obtain explicit forms of the second-order minimax estimators as well as the forms of the related weight functions. Our work establishes a substantial generalization of the results obtained so far in the literature. Illustrations of the resulting methods are provided and a simulation-based analysis is presented for the negative binomial case.

16.
Sophisticated statistical analyses of incidence frequencies are often required in various epidemiologic and biomedical applications. Among the most commonly applied methods is Pearson's χ2 test, which is designed to detect nonspecific anomalous patterns of frequencies and is useful for testing the significance of incidence heterogeneity. However, Pearson's χ2 test is not efficient for assessing whether the frequency in a particular cell (or class) can be attributed to chance alone. We recently developed statistical tests for detecting temporal anomalies of disease cases based on maximum and minimum frequencies; these tests are designed to assess the significance of a particularly high or low frequency. The purpose of this article is to demonstrate the merits of these tests in epidemiologic and biomedical studies. We show that our proposed methods are more sensitive and powerful for testing extreme cell counts than Pearson's χ2 test. This feature could provide important and valuable information in epidemiologic or biomedical studies. We elucidate and illustrate the differences in sensitivity between our tests and Pearson's χ2 test by analyzing a data set of Langerhans cell histiocytosis cases and hypothetical variants of it. We also compute and compare the statistical power of these methods using various sets of cell numbers and alternative frequencies. The investigation of statistical sensitivity and power presented in this work will provide investigators with useful guidelines for selecting the appropriate tests for their studies.
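The contrast can be seen with a toy example: the overall Pearson χ2 test responds to general heterogeneity, while a check aimed at the single largest cell asks a different question. The maximum-cell check below is only a simple Bonferroni-bounded binomial bound, assumed here for illustration; it is not the authors' proposed test.

```python
import numpy as np
from scipy.stats import chisquare, binom

counts = np.array([3, 5, 4, 2, 6, 15])     # hypothetical case counts over six periods; one spike
n, k = counts.sum(), len(counts)

# Pearson's chi-square: sensitive to overall heterogeneity, not to any particular cell
stat, p_overall = chisquare(counts)
print("Pearson X^2 p-value:", round(p_overall, 4))

# A simple check of the single largest cell: under uniformity each count is
# Binomial(n, 1/k); a Bonferroni bound over the k cells gives a conservative p-value.
p_max = min(1.0, k * binom.sf(counts.max() - 1, n, 1 / k))
print("conservative maximum-cell p-value:", round(p_max, 4))
```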

17.
Plutarch, in his book The Life of Lucullus, reports how the Roman general Lucullus estimated his enemies' food provisions. Lucullus' estimate was based on a sampling procedure. With respect to the history of statistics, this text of Plutarch's is the earliest written account of applied sampling.

18.
This paper suggests a new stratified randomized response model based on Kuk's [Biometrika (1990), 77, 2, pp. 436–438] model that has Neyman allocation and a considerable gain in precision. It is shown that the stratified randomized response models of Kim and Warde (2004), Kim and Elam (2005), and Kim and Elam (2007) are members of the proposed model. It is also shown that the proposed model is more efficient than Kuk's (1990) model, both theoretically and empirically. The results of this paper are further extended to the situation in which trials are repeated.

19.
In order to make inferences about a linear combination of two independent binomial proportions, various procedures exist (Wald's classic method; the exact, approximate, or maximized score methods; and the Newcombe-Zou method). This article defines and evaluates 25 different methods of inference and selects the ones with the best behavior. In general terms, the optimal method is the classic Wald method applied to the data after adding z²α/2/4 successes and z²α/2/4 failures (approximately 1 of each when α = 5%), provided no sample proportion has a value of 0 or 1 (otherwise the added increment may be different).
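A minimal sketch of this adjusted Wald approach is given below. It assumes the pseudo-counts are added to each of the two samples and takes the simple difference p1 − p2 as the linear combination; the other 24 methods and the boundary adjustments are not reproduced.

```python
import numpy as np
from scipy.stats import norm

def adjusted_wald_diff_ci(x1, n1, x2, n2, alpha=0.05):
    """Wald interval for p1 - p2 after adding z_{alpha/2}^2 / 4 pseudo-successes and
    the same number of pseudo-failures to each sample (about 1 of each at alpha = 5%)."""
    z = norm.ppf(1 - alpha / 2)
    add = z ** 2 / 4
    n1a, n2a = n1 + 2 * add, n2 + 2 * add
    p1, p2 = (x1 + add) / n1a, (x2 + add) / n2a
    se = np.sqrt(p1 * (1 - p1) / n1a + p2 * (1 - p2) / n2a)
    d = p1 - p2
    return d - z * se, d + z * se

print(adjusted_wald_diff_ci(8, 30, 3, 25))      # hypothetical counts for the two samples
```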

Supplemental materials are available for this article. Go to the publisher's online edition of Communications in Statistics - Simulation and Computation to view the free supplemental file.

20.
A Wiener process with unknown drift parameter μ is observed continuously, beginning at time 0, and one has to decide between the hypotheses μ ≤ 0 and μ > 0. For loss functions of the form sμ^r and linear cost functions, one wants to determine a minimax sequential test. Generalizing the results of DeGroot (1960), a minimax test within the class of all symmetrical SPRT's is given in explicit form. On the other hand, it is shown that this SPRT is, in general, no longer minimax in the class of all sequential tests.
