We consider the problem of hypothesis testing under a logistic model with two dichotomous independent variables. In particular, we consider the case in which the coefficients β1 and β2 of these variables are known a priori not to be of opposite sign. For this situation we show that there exists a simple nonparametric alternative to the likelihood ratio test for testing H0: β1 = β2 = 0 vs. H1: at least one βi ≠ 0. We find the asymptotic relative efficiency of this test and show that it exceeds 0.90 under a wide range of conditions. We also give an example.
Ever since R. A. Fisher published his 1936 article, "Has Mendel's Work Been Rediscovered?", historians of both biology and statistics have been fascinated by the surprisingly high conformity between Gregor (Johann) Mendel's observed and expected ratios in his famous experiments with peas. Fisher's calculated χ² statistic for the experiments, taken as a whole, suggested that results as good as or better than those Mendel reported could only be expected to occur about three times in every 100,000 attempts. The ensuing controversy as to whether or not the good Father "sophisticated" his data has continued to this very day. In recent years the controversy has focused upon the more technical question of what underlying genetic arrangement Mendel actually studied. The statistical issues of the controversy are examined in a historical and comparative perspective. The changes the controversy has gone through are evaluated, and the nature of its current, more biological, status is briefly discussed.
Analytical methods for interval estimation of differences between variances have not been described. A simple analytical method is given for interval estimation of the difference between variances of two independent samples. It is shown, using simulations, that confidence intervals generated with this method have close to nominal coverage even when sample sizes are small and unequal and observations are highly skewed and leptokurtic, provided the difference in variances is not very large. The method is also adapted for testing the hypothesis of no difference between variances. The test is robust but slightly less powerful than Bonett's test with small samples.
A simulation study was done to compare seven confidence interval methods, based on the normal approximation, for the difference of two binomial probabilities. Cases considered included minimum expected cell sizes ranging from 2 to 15 and smallest group sizes (NMIN) ranging from 6 to 100. Our recommendation is to use a continuity correction of 1/(2 NMIN) combined with the use of (N − 1) rather than N in the estimate of the standard error. For all of the cases considered with minimum expected cell size of at least 3, this method gave coverage probabilities close to or greater than the nominal 90% and 95%. The Yates method is also acceptable, but it is slightly more conservative. At the other extreme, the usual method (with no continuity correction) does not provide adequate coverage even at the larger sample sizes. For the 99% intervals, our recommended method and the Yates correction performed equally well and are reasonable for minimum expected cell sizes of at least 5. None of the methods performed consistently well for a minimum expected cell size of 2.
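The recommended interval can be sketched in code. This is a hedged illustration only: the exact placement of the continuity correction (added to the half-width) and the use of (n − 1) in each variance term are our reading of the description above, and the function name `binom_diff_ci` is hypothetical.

```python
from math import sqrt

def binom_diff_ci(x1, n1, x2, n2, z=1.96):
    """Approximate CI for p1 - p2 using a continuity correction of
    1/(2*NMIN) and (N - 1) in the standard-error denominators
    (a sketch of the recommendation described in the abstract)."""
    p1, p2 = x1 / n1, x2 / n2
    # (n - 1) rather than n in each variance term of the standard error
    se = sqrt(p1 * (1 - p1) / (n1 - 1) + p2 * (1 - p2) / (n2 - 1))
    cc = 1 / (2 * min(n1, n2))          # continuity correction 1/(2 NMIN)
    half = z * se + cc
    d = p1 - p2
    return d - half, d + half
```

For example, `binom_diff_ci(8, 20, 4, 20)` returns an interval centered on the observed difference 0.4 − 0.2 = 0.2, widened by the continuity correction.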
Quantitative risk assessments for physical, chemical, biological, occupational, or environmental agents rely on scientific studies to support their conclusions. These studies often include relatively few observations, and, as a result, models used to characterize the risk may include large amounts of uncertainty. The motivation, development, and assessment of new methods for risk assessment is facilitated by the availability of a set of experimental studies that span a range of dose‐response patterns that are observed in practice. We describe construction of such a historical database focusing on quantal data in chemical risk assessment, and we employ this database to develop priors in Bayesian analyses. The database is assembled from a variety of existing toxicological data sources and contains 733 separate quantal dose‐response data sets. As an illustration of the database's use, prior distributions for individual model parameters in Bayesian dose‐response analysis are constructed. Results indicate that including prior information based on curated historical data in quantitative risk assessments may help stabilize eventual point estimates, producing dose‐response functions that are more stable and precisely estimated. These in turn produce potency estimates that share the same benefit. We are confident that quantitative risk analysts will find many other applications and issues to explore using this database.
We compare the relative influence of different celebrity endorser attributes on respondents' intentions to donate to a fictitious charity. The celebrity endorser attributes we modeled are expertise, admirability, likeability, trustworthiness, and attractiveness. We examine the moderating effects of audience sex and general attitudes toward charities, as well as the mediating effects of perceived endorser fit with the endorsed charity. Our results indicate that endorser expertise and admirability are significant predictors of audience donation intentions, and that audience general attitudes toward charities significantly moderate the influence of expertise and admirability on donation intentions. We discuss the implications of our findings for researchers and practitioners.
In pure population problems, a single resource is to be distributed equally among the agents in a society, and the social planner chooses population size(s) and per-capita consumption(s) for each resource constraint and set of feasible population sizes within the domain of the solution. This paper shows that a weak condition regarding the possible choice of a zero population is necessary and sufficient for the rationalizability of a solution by a welfarist social ordering. In addition, solutions that are rationalized by critical-level generalized utilitarianism are characterized by means of a homogeneity property.
Received: 1 December 1997 / Accepted: 26 February 1998
We report on an empirical investigation of the modified rescaled adjusted range, or R/S statistic, proposed by Lo (1991, Econometrica 59, 1279–1313) as a test for long-range dependence with good robustness properties under 'extra' short-range dependence. In contrast to the classical R/S statistic, which uses the standard deviation S to normalize the rescaled range R, Lo's modified R/S statistic Vq is normalized by a modified standard deviation Sq that takes into account the covariances of the first q lags, so as to discount the influence of any short-range dependence structure present in the data. Depending on the value of the resulting test statistic Vq, the null hypothesis of no long-range dependence is either rejected or accepted. By performing Monte Carlo simulations with 'truly' long-range- and short-range-dependent time series, we study the behavior of Vq as a function of q and uncover a number of serious drawbacks to using Lo's method in practice. For example, we show that as the truncation lag q increases, the test statistic Vq has a strong bias toward accepting the null hypothesis (i.e., no long-range dependence), even in ideal situations of 'purely' long-range-dependent data.
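The construction of Vq described above can be sketched as follows. This is a minimal illustration under stated assumptions: the Bartlett weights w_j = 1 − j/(q+1) in Sq follow Lo's paper rather than anything spelled out in the abstract, and the function name `lo_rs` is ours.

```python
import numpy as np

def lo_rs(x, q):
    """Lo's modified rescaled-range statistic V_q (a sketch).

    The range R of the cumulative deviations is normalized by S_q, a
    standard deviation adjusted with Bartlett-weighted autocovariances
    up to lag q; q = 0 recovers the classical R/S normalization."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    d = x - x.mean()
    cs = np.cumsum(d)
    R = cs.max() - cs.min()             # adjusted range of partial sums
    s2 = np.mean(d * d)                 # sample variance (denominator n)
    acov = 0.0
    for j in range(1, q + 1):
        w = 1.0 - j / (q + 1.0)         # Bartlett weight for lag j
        acov += w * np.sum(d[j:] * d[:-j]) / n
    sq = np.sqrt(s2 + 2.0 * acov)       # modified standard deviation S_q
    return R / (sq * np.sqrt(n))
```

Plotting `lo_rs(x, q)` against increasing q on a long-range-dependent series is one way to see the bias the abstract describes: as q grows, Sq absorbs more of the dependence and Vq shrinks toward the acceptance region.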