Similar articles
20 similar articles found (search time: 265 ms)
1.
In this paper, Anbar's (1983) approach for estimating the difference between two binomial proportions is discussed with respect to a hypothesis testing problem. Such an approach results in two possible testing strategies. While the results of the tests are expected to agree for a large sample size when the two proportions are equal, the tests are shown to perform quite differently in terms of their probabilities of a Type I error for selected sample sizes. Moreover, the tests can lead to different conclusions, which is illustrated via a simple example, and the probability of such cases can be relatively large. In an attempt to improve the tests while preserving their relative simplicity, a modified test is proposed. The performance of this test and of a conventional test based on the normal approximation is assessed. It is shown that the modified Anbar's test better controls the probability of a Type I error for moderate sample sizes.
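The conventional normal-approximation comparator mentioned in the abstract can be sketched in a few lines. This is a generic pooled-variance two-proportion z-test, not Anbar's modified test; the function name is illustrative:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Pooled normal-approximation test of H0: p1 = p2."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value
```

For example, 30/50 versus 20/50 successes gives z ≈ 2.0, so the null of equal proportions is rejected at the 5% level.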

2.
Some results on the estimation of a symmetric density function are given. For the case when the point of symmetry, θ, is known, it is shown that a symmetrized kernel estimator is, as measured by MISE, approximately as good as a non-symmetrized one based on twice as many observations. This result remains true if the estimated density is a normal one and θ is estimated by the sample mean. Some Monte Carlo results for several densities and sample sizes are given for the case when θ is estimated by the sample median.
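The symmetrization device is simple to implement: reflect each observation about the known centre θ and run an ordinary kernel estimator on the pooled 2n points. A minimal Gaussian-kernel sketch follows (the kernel choice and user-supplied bandwidth h are assumptions, not the paper's exact specification):

```python
import numpy as np

def kde(x_grid, data, h):
    # Ordinary Gaussian kernel density estimate evaluated on a grid
    u = (x_grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

def symmetrized_kde(x_grid, data, theta, h):
    # Reflect each observation about the known symmetry point theta and
    # pool: equivalent to an ordinary kernel estimate based on 2n points.
    augmented = np.concatenate([data, 2 * theta - data])
    return kde(x_grid, augmented, h)
```

By construction the estimate is exactly symmetric about θ, which is where the "as good as twice the sample size" effect comes from.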

3.
Lehmann & Stein (1948) proved the existence of non-similar tests which can be more powerful than best similar tests. They used Student's problem of testing for a non-zero mean given a random sample from the normal distribution with unknown variance as an example. This raises the question: should we use a non-similar test instead of Student's t test? Questions like this can be answered by comparing the power of the test with the power envelope. This paper discusses the difficulties involved in computing power envelopes. It reports an empirical comparison of the power of the t test and the power envelope and finds that the two are almost identical, especially for sample sizes greater than 20. These findings suggest that, as well as being uniformly most powerful (UMP) within the class of similar tests, Student's t test is approximately UMP within the class of all tests. For practical purposes it might also be regarded as UMP when moderate or large sample sizes are involved.

4.
For interval estimation of a proportion, coverage probabilities tend to be too large for “exact” confidence intervals based on inverting the binomial test and too small for the interval based on inverting the Wald large-sample normal test (i.e., sample proportion ± z-score × estimated standard error). Wilson's suggestion of inverting the related score test with the null rather than the estimated standard error yields coverage probabilities close to nominal confidence levels, even for very small sample sizes. The 95% score interval behaves similarly to the adjusted Wald interval obtained after adding two “successes” and two “failures” to the sample. In elementary courses, with the score and adjusted Wald methods it is unnecessary to provide students with awkward sample size guidelines.
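The three intervals compared above are each a one-liner. A minimal 95% sketch (endpoints are not clipped to [0, 1], which a production version would do):

```python
import math

def wald_interval(x, n, z=1.96):
    # Wald: sample proportion +/- z * estimated standard error
    p = x / n
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

def wilson_interval(x, n, z=1.96):
    # Wilson score: invert the score test (null, not estimated, SE)
    p = x / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

def adjusted_wald_interval(x, n, z=1.96):
    # Agresti-Coull: add two "successes" and two "failures", then Wald
    return wald_interval(x + 2, n + 4, z)
```

Note that for x = 0 the plain Wald interval degenerates to (0, 0), while the score and adjusted Wald intervals remain informative, illustrating the coverage problem described above.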

5.
In finite population sampling, it has long been known that, for small sample sizes, when sampling from a skewed population, the usual frequentist intervals for the population mean cover the true value less often than their stated frequency of coverage. Recently, a non-informative Bayesian approach to some problems in finite population sampling has been developed, which is based on the 'Polya posterior'. For large sample sizes, these methods often closely mimic standard frequentist methods. In this paper, a modification of the 'Polya posterior', which employs the weighted Polya distribution, is shown to give interval estimators with improved coverage properties for problems with skewed populations and small sample sizes. This approach also yields improved tests for hypotheses about the mean of a skewed distribution.

6.
This paper is concerned with testing the equality of two high‐dimensional spatial sign covariance matrices with applications to testing the proportionality of two high‐dimensional covariance matrices. It is interesting that these two testing problems are completely equivalent for the class of elliptically symmetric distributions. This paper develops a new test for the equality of two high‐dimensional spatial sign covariance matrices based on the Frobenius norm of the difference between the two matrices. The asymptotic normality of the proposed test statistic is derived under the null and alternative hypotheses when the dimension and sample sizes both tend to infinity. Moreover, the asymptotic power function is also presented. Simulation studies show that the proposed test performs very well in a wide range of settings and can accommodate large dimensions combined with small sample sizes.
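A sample spatial sign covariance matrix is just the average of the outer products of the direction vectors u_i = (x_i − μ)/‖x_i − μ‖, and the statistic above builds on the Frobenius norm of the difference between two such matrices. In the sketch below the centring uses the sample mean for simplicity, which is an assumption rather than the paper's exact centring, and `frobenius_stat` is only the raw distance, not the standardized test statistic:

```python
import numpy as np

def spatial_sign_cov(X):
    # Average outer product of unit direction vectors (rows of X are
    # observations); centring at the sample mean is a simplification.
    U = X - X.mean(axis=0)
    U = U / np.linalg.norm(U, axis=1, keepdims=True)
    return U.T @ U / len(X)

def frobenius_stat(X, Y):
    # Squared Frobenius norm of the difference of the two sample
    # spatial sign covariance matrices.
    d = spatial_sign_cov(X) - spatial_sign_cov(Y)
    return np.linalg.norm(d, 'fro') ** 2
```

A useful sanity check: every spatial sign covariance matrix has trace exactly 1, since each outer product u u' has unit trace.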

7.
A large‐sample problem of illustrating noninferiority of an experimental treatment over a referent treatment for binary outcomes is considered. The methods of illustrating noninferiority involve constructing the lower two‐sided confidence bound for the difference between binomial proportions corresponding to the experimental and referent treatments and comparing it with the negative value of the noninferiority margin. The three considered methods, Anbar, Falk–Koch, and Reduced Falk–Koch, handle the comparison in an asymmetric way, that is, only the referent proportion out of the two, experimental and referent, is directly involved in the expression for the variance of the difference between two sample proportions. Five continuity corrections (including zero) are considered with respect to each approach. The key properties of the corresponding methods are evaluated via simulations. First, the uncorrected two‐sided confidence intervals can, potentially, have smaller coverage probability than the nominal level even for moderately large sample sizes, for example, 150 per group. Next, the 15 testing methods are discussed in terms of their Type I error rate and power. In the settings with a relatively small referent proportion (about 0.4 or smaller), the Anbar approach with Yates’ continuity correction is recommended for balanced designs and the Falk–Koch method with Yates’ correction is recommended for unbalanced designs. For relatively moderate (about 0.6) and large (about 0.8 or greater) referent proportion, the uncorrected Reduced Falk–Koch method is recommended, although in this case, all methods tend to be over‐conservative. These results are expected to be used in the design stage of a noninferiority study when asymmetric comparisons are envisioned. Copyright © 2013 John Wiley & Sons, Ltd.

8.
The negative binomial distribution offers an alternative view to the binomial distribution for modeling count data. This alternative view is particularly useful when the probability of success is very small, because, unlike the fixed sampling scheme of the binomial distribution, the inverse sampling approach allows one to collect enough data in order to adequately estimate the proportion of success. However, despite work that has been done on the joint estimation of two binomial proportions from independent samples, there is little, if any, similar work for negative binomial proportions. In this paper, we construct and investigate three confidence regions for two negative binomial proportions based on three statistics: the Wald (W), score (S) and likelihood ratio (LR) statistics. For large-to-moderate sample sizes, this paper finds that all three regions have good coverage properties, with comparable average areas for large sample sizes but with the S method producing the smaller regions for moderate sample sizes. In the small sample case, the LR method has good coverage properties, but often at the expense of comparatively larger areas. Finally, we apply these three regions to some real data for the joint estimation of liver damage rates in patients taking one of two drugs.

9.
The Fisher distribution is frequently used as a model for the probability distribution of directional data, which may be specified either in terms of unit vectors or angular co-ordinates (co-latitude and azimuth). If, in practical situations, only the co-latitudes can be observed, the available data must be regarded as a sample from the corresponding marginal distribution. This paper discusses the estimation by Maximum Likelihood (ML) and the Method of Moments of the two parameters of this marginal Fisher distribution. The moment estimators are generally simpler to compute than the ML estimators, and have high asymptotic efficiency.

10.
We examine the empirical relevance of three alternative asymptotic approximations to the distribution of instrumental variables estimators by Monte Carlo experiments. We find that conventional asymptotics provides a reasonable approximation to the actual distribution of instrumental variables estimators when the sample size is reasonably large. For most sample sizes, we find that Bekker [11] asymptotics provides a reasonably good approximation even when the first-stage R2 is very small. We conclude that reporting the Bekker [11] confidence interval would suffice for most microeconometric (cross-sectional) applications, and that the comparative advantage of the Staiger and Stock [5] asymptotic approximation lies in applications with sample sizes typical of macroeconometric (time series) applications.

11.
A double acceptance sampling plan for the truncated life test is developed assuming that the lifetime of a product follows a generalized log-logistic distribution with known shape parameters. The zero and one failure scheme is mainly considered, where the lot is accepted if no failures are observed from the first sample and it is rejected if two or more failures occur. When there is one failure from the first sample, the second sample is drawn and tested for the same duration as the first sample. The minimum sample sizes of the first and second samples are determined to ensure that the true median life is longer than the given life at the specified consumer’s confidence level. The operating characteristics are analyzed according to various ratios of the true median life to the specified life. The minimum such ratios are also obtained so as to lower the producer’s risk at the specified level. The results are explained with examples.

12.
An empirical distribution function estimator for the difference of order statistics from two independent populations can be used for inference between quantiles from these populations. The inferential properties of the approach are evaluated in a simulation study where different sample sizes, theoretical distributions, and quantiles are studied. Small to moderate sample sizes, tail quantiles, and quantiles which do not coincide with the expectation of an order statistic are identified as problematic for appropriate Type I error control.

13.
This paper studies the effect of autocorrelation on the smoothness of the trend of a univariate time series estimated by means of penalized least squares. An index of smoothness is deduced for the case of a time series represented by a signal-plus-noise model, where the noise follows an autoregressive process of order one. This index is useful for measuring the distortion of the amount of smoothness by incorporating the effect of autocorrelation. Different autocorrelation values are used to appreciate the numerical effect on smoothness for estimated trends of time series with different sample sizes. For comparative purposes, several graphs of two simulated time series are presented, where the estimated trend is compared with and without autocorrelation in the noise. Some findings are as follows. On the one hand, when the autocorrelation is negative (no matter how large) or positive but small, the estimated trend gets very close to the true trend. Even in this case, the estimation is improved by fixing the index of smoothness according to the sample size. On the other hand, when the autocorrelation is positive and large, the simulated and estimated trends lie far away from the true trend. This situation is mitigated by fixing an appropriate index of smoothness for the estimated trend in accordance with the sample size at hand. Finally, an empirical example illustrates the use of the smoothness index when estimating the trend of Mexico’s quarterly GDP.
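Penalized least squares trend estimation of the kind studied here can be sketched with a second-difference (Whittaker/Hodrick-Prescott style) penalty, where the smoothing parameter lam plays the role of the smoothness index discussed above. This generic formulation is an illustration under those assumptions, not the paper's exact estimator:

```python
import numpy as np

def pls_trend(y, lam):
    # Penalized least squares trend: minimise ||y - g||^2 + lam * ||D g||^2,
    # where D is the (n-2) x n second-difference operator. The closed-form
    # solution is g = (I + lam * D'D)^{-1} y.
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)  # rows are [1, -2, 1] stencils
    return np.linalg.solve(np.eye(n) + lam * (D.T @ D), np.asarray(y, float))
```

With lam = 0 the "trend" reproduces the data exactly; as lam grows, the second differences of the estimate are driven toward zero and the trend approaches a straight line.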

14.
The purpose of this paper is to investigate the performance of the LDF (linear discriminant function) and QDF (quadratic discriminant function) for classifying observations from three types of univariate and multivariate non-normal distributions on the basis of the misclassification rate. Theoretical and empirical results are described for univariate distributions, and empirical results are presented for multivariate distributions. It is also shown that the sign of the skewness of each population and the kurtosis have essential effects on the performance of the two discriminant functions. The variation of the population-specific misclassification rates depends greatly on the sample size. For high-dimensional population distributions, if the sample sizes are sufficient, the QDF performs better than the LDF. As an application, we give criteria for choosing between the two discriminant functions.

15.
For the exchangeable binary data with random cluster sizes, we use a pairwise likelihood procedure to give a set of approximately optimal unbiased estimating equations for estimating the mean and variance parameters. Theoretical results are obtained establishing the large sample properties of the solutions to the estimating equations. An application to a developmental toxicity study is given. Simulation results show that the pairwise likelihood procedure is valid and performs better than the GEE procedure for the exchangeable binary data.

16.
We consider the problem of sequentially deciding which of two treatments is superior. A class of simple approximate sequential tests is proposed. These have probabilities of correct selection that are approximately independent of the sampling rule and depend on unknown parameters only through the function of interest, such as the difference or ratio of mean responses. The tests are obtained by using a normal approximation, which is also employed to derive approximate expressions for the probabilities of correct selection and the expected sample sizes. A class of data-dependent sampling rules is proposed for minimizing any weighted average of the expected sample sizes on the two treatments, with the weights being allowed to depend on unknown parameters. The tests are studied in the particular case of exponentially distributed responses.

17.
In this paper we propose a sequential procedure for designing optimum experiments to discriminate between two binary data models. For the problem to be fully specified, not only the model link functions should be provided but also their associated linear predictor structures. Further, we suppose that one of the models is true, although it is not known which. Under these assumptions the procedure consists of making sequential choices of single experimental units to discriminate between the rival models as efficiently as possible. Depending on whether or not the models are nested, alternative methods are proposed.

To illustrate the procedure, a simulation study for the classical case of probit versus logit models is presented. It enables us to estimate the total sample sizes required to attain a certain power of discrimination and to compare them with the sample sizes of methods previously suggested in the literature.

18.
A consistent test for a difference in locations between two bivariate populations is proposed. The test is similar to the Mann-Whitney test and depends on the exceedances of slopes of the two samples, where the slope for each sample observation is computed by taking the ratio of the observed values. In terms of the slopes, the problem reduces to a univariate one. The power of the test has been compared with those of various existing tests by simulation. The proposed test statistic is compared with Mardia's (1967) test statistic, the Peters-Randles (1991) test statistic, Wilcoxon's rank sum test statistic and Hotelling's T2 test statistic using the Monte Carlo technique. It performs better than the other statistics compared for small differences in location between the two populations when the underlying population is population 7 (a light-tailed population) and the sample sizes are 15 and 18. When the underlying population is population 6 (a heavy-tailed population) and the sample sizes are 15 and 18, it performs better than the other statistics compared, except Wilcoxon's rank sum test statistic, for small differences in location. It performs better than Mardia's (1967) test statistic for large differences in location when the underlying population is a bivariate normal mixture with probability p = 0.5, population 6, a Pearson type II population or a Pearson type VII population, for sample sizes 15 and 18. Under a bivariate normal population it performs as well as Mardia's (1967) test statistic for small differences in location with sample sizes 15 and 18. For sample sizes 25 and 28 it performs better than Mardia's (1967) test statistic when the underlying population is population 6, a Pearson type II population or a Pearson type VII population.

19.
Attributes sampling is an important inspection tool in areas like product quality control, service quality control or auditing. The classical item quality scheme of attributes sampling distinguishes between conforming and nonconforming items, and measures lot quality by the lot fraction nonconforming. A more refined quality scheme rates item quality by the number of nonconformities occurring on the item, e.g., the number of defective components in a composite product or the number of erroneous entries in an accounting record, where lot quality is measured by the average number of nonconformities occurring on items in the lot. Statistical models of sampling for nonconformities rest on the idealizing assumption that the number of nonconformities on an item is unbounded. In most real cases, however, the number of nonconformities on an item has an upper bound, e.g., the number of product components or the number of entries in an accounting record. The present study develops two statistical models of sampling lots for nonconformities in the presence of an upper bound a for the number of nonconformities on each single item. For both models, the statistical properties of the sample statistics and the operating characteristics of single sampling plans are investigated. A broad numerical study compares single sampling plans with prescribed statistical properties under the bounded and unbounded quality schemes. In a large number of cases, the sample sizes for the realistic bounded models are smaller than the sample sizes for the idealizing unbounded model.

20.
Three sampling designs are considered for estimating the sum of k population means by the sum of the corresponding sample means. These are (a) the optimal design; (b) equal sample sizes from all populations; and (c) sample sizes that render equal variances to all sample means. Designs (b) and (c) are equally inefficient, and may yield a variance up to k times as large as that of (a). Similar results are true when the cost of sampling is introduced, and they depend on the population sampled.
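The variance comparison among the three designs is easy to verify numerically. For Var(Σ x̄_i) = Σ σ_i²/n_i under a total budget N, design (a) allocates n_i ∝ σ_i, design (b) uses n_i = N/k, and design (c) equalizes σ_i²/n_i; a minimal sketch under those formulas (the function name is illustrative):

```python
import numpy as np

def design_variances(sigmas, N):
    # Variance of the sum of sample means under the three allocations
    # of a total sample size N across k populations with SDs sigmas.
    s = np.asarray(sigmas, dtype=float)
    k = len(s)
    v_opt = s.sum() ** 2 / N            # (a) n_i proportional to sigma_i
    v_equal_n = k * (s**2).sum() / N    # (b) n_i = N / k
    v_equal_var = k * (s**2).sum() / N  # (c) sigma_i^2 / n_i constant
    return v_opt, v_equal_n, v_equal_var
```

Designs (b) and (c) give the identical expression k Σ σ_i² / N, which is why they are equally inefficient, and by the Cauchy-Schwarz inequality this exceeds (Σ σ_i)²/N by a factor between 1 (all σ_i equal) and k (one σ_i dominates).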


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号