Similar Documents
20 similar documents retrieved.
1.
First of all, the parameters involved in Chen and Hsu (1995) are unknown. For convenience, we discuss the two possibilities for the unknown parameter μ, namely μ = M and μ ≠ M, separately. In other words, we do not know whether μ = M or μ ≠ M; therefore, we must estimate it before drawing any statistical conclusion about the index Cpmk. The conclusion should be applied only after the hypothesis μ = M has been tested.
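For orientation, the sketch below is a minimal plug-in estimator of Cpmk from sample data, assuming the usual textbook definition Cpmk = min(USL − μ, μ − LSL) / (3·√(σ² + (μ − T)²)) with target T = M; it is illustrative only and is not taken from Chen and Hsu's article.

```python
import numpy as np

def cpmk_hat(x, lsl, usl, target=None):
    """Plug-in estimate of Cpmk; the standard textbook definition is assumed."""
    x = np.asarray(x, dtype=float)
    t = (lsl + usl) / 2 if target is None else target  # target T, often the midpoint M
    mu_hat = x.mean()
    # denominator measures variation around the target, as in Cpm
    denom = 3.0 * np.sqrt(x.var(ddof=1) + (mu_hat - t) ** 2)
    return min(usl - mu_hat, mu_hat - lsl) / denom

# example usage with simulated data (illustrative only)
rng = np.random.default_rng(0)
sample = rng.normal(loc=10.2, scale=0.5, size=50)
print(cpmk_hat(sample, lsl=8.5, usl=11.5))
```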

2.
3.
4.
The strength of association between two dichotomous characteristics A and B can be measured in many ways. All of these statistics are biased when there is misclassification, and all are prevalence dependent whether or not their population values are. Measures lacking fixed endpoints for random and perfect association, such as sensitivity, specificity, risk ratios, and odds ratios, have a bias either so unpredictable or so large that the observable and true measures of association may bear little resemblance to each other. Reexpressions of these measures that fix the endpoints, and other measures with fixed endpoints such as kappa, phi, gamma, risk difference, and attributable risk, produce attenuated estimates of their true values. Disattenuating such estimators is possible using test–retest data.
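As a hedged illustration of two of the fixed-endpoint measures named here, the sketch below computes the phi coefficient and Cohen's kappa from a 2×2 table; the formulas are the standard ones and the code is not taken from the article.

```python
import numpy as np

def phi_and_kappa(table):
    """table = [[a, b], [c, d]] counts for characteristics A (rows) and B (columns)."""
    (a, b), (c, d) = np.asarray(table, dtype=float)
    n = a + b + c + d
    # phi coefficient: (ad - bc) over the square root of the product of the marginals
    phi = (a * d - b * c) / np.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_obs = (a + d) / n
    p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    kappa = (p_obs - p_exp) / (1 - p_exp)
    return phi, kappa

print(phi_and_kappa([[40, 10], [5, 45]]))
```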

5.
This paper provides a complete proof of the Welch–Berlekamp theorem, on which the Welch–Berlekamp algorithm is founded. By introducing an analytic approach to coset-leader decoders for Reed–Solomon codes, the Welch–Berlekamp key equation for error correction is enlarged, a complete proof of the Welch–Berlekamp theorem is derived in a natural way, and the theorem is extended so that the BCH-bound constraint is removed.

6.
Recent efforts by the American Statistical Association to improve statistical practice, especially in countering the misuse and abuse of null hypothesis significance testing (NHST) and p-values, are to be welcomed. But will they be successful? The present study offers compelling evidence that this will be an extraordinarily difficult task. Dramatic citation-count data on 25 articles and books severely critical of NHST's negative impact on good science, underlining that this issue was and is well known, did nothing to stem its usage over the period 1960–2007. On the contrary, employment of NHST increased during this time. To be successful in this endeavor, as well as in restoring the relevance of the statistics profession to the scientific community in the 21st century, the ASA must be prepared to dispense detailed advice. This includes specifying those situations, if they can be identified, in which the p-value plays a clearly valuable role in data analysis and interpretation. The ASA might also consider a statement that recommends abandoning the use of p-values.

7.
8.
9.
Distance sampling and capture–recapture are the two most widely used wildlife abundance estimation methods. Capture–recapture methods have only recently incorporated models for spatial distribution, and there is an increasing tendency for distance sampling methods to incorporate spatial models rather than to rely on partly design-based spatial inference. In this overview we show how spatial models are central to modern distance sampling and that spatial capture–recapture models arise as an extension of distance sampling methods. Depending on the type of data recorded, they can be viewed as particular kinds of hierarchical binary regression, Poisson regression, survival, or time-to-event models, with individuals’ locations as latent variables and a spatial model as the latent variable distribution. Incorporation of spatial models in these two methods provides new opportunities for drawing explicitly spatial inferences. Areas of likely future development include more sophisticated spatial and spatio-temporal modelling of individuals’ locations and movements, new methods for integrating spatial capture–recapture and other kinds of ecological survey data, and methods for dealing with the recapture uncertainty that often arises when “capture” consists of detection by a remote device such as a camera trap or microphone.
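To make the detection-versus-distance idea underlying these methods concrete, the following minimal sketch gives a conventional line-transect density estimate with a half-normal detection function; it is a generic illustration under standard assumptions, not one of the spatial models discussed in the article.

```python
import numpy as np

def halfnormal_line_transect_density(distances, line_length):
    """Conventional line-transect density estimate with a half-normal
    detection function g(x) = exp(-x^2 / (2 sigma^2)); illustrative only."""
    x = np.asarray(distances, dtype=float)
    n = x.size
    sigma2_hat = np.sum(x ** 2) / n          # MLE of sigma^2 for untruncated half-normal
    esw = np.sqrt(np.pi * sigma2_hat / 2.0)  # effective strip half-width, integral of g
    return n / (2.0 * line_length * esw)     # animals per unit area

# example usage (distances and line_length on the same measurement scale)
rng = np.random.default_rng(1)
obs = np.abs(rng.normal(0.0, 25.0, size=60))  # simulated perpendicular distances
print(halfnormal_line_transect_density(obs, line_length=5000.0))
```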

10.
11.
12.
It is important in computing science to estimate the number of data blocks needed to answer a query. Three different answers to this problem emerged respectively in 1975, 1977, and 1982. This article investigates the basic differences of the three models and observes that the same differences were shared a long time ago by the physicists who used different models for the behavior of elementary particles to obtain the Maxwell-Boltzmann statistics, the Bose-Einstein statistics, and the Fermi-Dirac statistics.
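As a hedged illustration of the kind of estimate at issue, the sketch below computes the expected number of data blocks touched when k of n records, stored in m equal blocks, are selected at random, using the classical formulas commonly attributed to Cardenas (1975) and Yao (1977); the article's third (1982) model is not reproduced here.

```python
from math import prod

def blocks_cardenas(n, m, k):
    """Records assumed placed independently of block boundaries:
    each of the m blocks is missed with probability (1 - 1/m)^k."""
    return m * (1.0 - (1.0 - 1.0 / m) ** k)

def blocks_yao(n, m, k):
    """k distinct records drawn without replacement from n records
    stored in m blocks of size n/m (n divisible by m assumed)."""
    b = n // m  # records per block
    p_miss = prod((n - b - i) / (n - i) for i in range(k))
    return m * (1.0 - p_miss)

# example usage: 10 000 records in 100 blocks, query selecting 150 records
print(blocks_cardenas(10_000, 100, 150), blocks_yao(10_000, 100, 150))
```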

13.
14.
The ICH E9 guideline on Statistical Principles for Clinical Trials is a pivotal document for statisticians in clinical research in the pharmaceutical industry guiding, as it does, statistical aspects of the planning, conduct and analysis of regulatory clinical trials. New statisticians joining the industry require a thorough and lasting understanding of the 39-page guideline. Given the amount of detail to be covered, traditional (lecture-style) training methods are largely ineffective. Directed reading, perhaps in groups, may be a helpful approach, especially if experienced staff are involved in the discussions. However, as in many training scenarios, exercise-based training is often the most effective approach to learning. In this paper, we describe several variants of a training module in ICH E9 for new statisticians, combining directed reading with a game-based exercise, which have proved to be highly effective and enjoyable for course participants.

15.
This paper proposes a hierarchical Bayes estimator for a panel data random coefficient model with heteroskedasticity to assess the contribution of R&D capital to total factor productivity. Based on Hall (1993) data for 323 US firms over 1976–1990, we find that there appears to be substantial unobserved heterogeneity and heteroskedasticity across firms and industries, which supports the use of our Bayes inference procedure. We find much higher returns to R&D capital, and a more pronounced downswing for the 1981–1985 period followed by a more pronounced upswing, than those yielded by conventional feasible generalized least squares estimators or other estimates. The estimated elasticities of R&D capital are 0.062 for 1976–1980, 0.036 for 1981–1985 and 0.081 for 1986–1990, while the estimated elasticities of ordinary capital are much more stable over these periods.

16.
Zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) models are recommended for handling excessive zeros in count data. For various reasons, researchers may not address zero inflation. This paper helps educate researchers on (1) the importance of accounting for zero inflation and (2) the consequences of misspecifying the statistical model. Using simulations, we found that when the zero inflation in the data was ignored, estimation was poor and statistically significant findings were missed. When overdispersion within the zero-inflated data was ignored, poor estimation and inflated Type I errors resulted. Recommendations on when to use the ZINB and ZIP models are provided. In an illustration using a two-step model selection procedure (a likelihood ratio test and the Vuong test), the procedure correctly identified the ZIP model only when the distributions had moderate means and sample sizes, and it did not correctly identify the ZINB model or the zero inflation in the ZIP and ZINB distributions.
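As a hedged reminder of the model structure being compared (standard formulas, not the article's simulation code), the ZIP probability mass function mixes a point mass at zero with a Poisson component, and its log-likelihood can be written directly:

```python
import numpy as np
from scipy.stats import poisson

def zip_logpmf(y, lam, pi):
    """Zero-inflated Poisson: P(0) = pi + (1 - pi) e^{-lam},
    P(k) = (1 - pi) * Poisson(k; lam) for k >= 1."""
    y = np.asarray(y)
    pois = poisson.pmf(y, lam)
    pmf = np.where(y == 0, pi + (1.0 - pi) * pois, (1.0 - pi) * pois)
    return np.log(pmf)

def zip_negloglik(params, y):
    lam, pi = params
    return -np.sum(zip_logpmf(y, lam, pi))

# example usage: evaluate the fit of (lam=2.0, pi=0.3) on a small count sample
counts = np.array([0, 0, 0, 1, 2, 0, 3, 0, 0, 4])
print(zip_negloglik((2.0, 0.3), counts))
```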

17.
18.
19.
This article provides three approximate solutions to the multivariate Behrens–Fisher problem: the F statistic, the Bartlett corrected statistic, and the modified Bartlett corrected statistic. Empirical results indicate that the F statistic outperforms the other two as well as five existing procedures. The modified Bartlett corrected statistic is also very competitive.
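For context, the basic two-sample statistic in the multivariate Behrens–Fisher problem compares mean vectors without assuming equal covariance matrices; the sketch below is a standard construction, not the article's corrected statistics, whose role is to supply approximate reference distributions for quantities of this kind.

```python
import numpy as np

def behrens_fisher_t2(x, y):
    """T^2 = (xbar - ybar)' (S1/n1 + S2/n2)^{-1} (xbar - ybar),
    with no equal-covariance assumption; rows are observations."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n1, n2 = x.shape[0], y.shape[0]
    d = x.mean(axis=0) - y.mean(axis=0)
    v = np.cov(x, rowvar=False) / n1 + np.cov(y, rowvar=False) / n2
    return float(d @ np.linalg.solve(v, d))

# example usage with simulated bivariate samples
rng = np.random.default_rng(2)
a = rng.multivariate_normal([0, 0], [[1, 0.3], [0.3, 1]], size=30)
b = rng.multivariate_normal([0.5, 0], [[2, 0.0], [0.0, 0.5]], size=40)
print(behrens_fisher_t2(a, b))
```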

20.
We consider an empirical Bayes approach to standard nonparametric regression estimation using a nonlinear wavelet methodology. Instead of specifying a single prior distribution on the parameter space of wavelet coefficients, which is usually the case in the existing literature, we elicit the ε-contamination class of prior distributions that is particularly attractive to work with when one seeks robust priors in Bayesian analysis. The type II maximum likelihood approach to prior selection is used by maximizing the predictive distribution for the data in the wavelet domain over a suitable subclass of the ε-contamination class of prior distributions. For the prior selected, the posterior mean yields a thresholding procedure which depends on one free prior parameter and it is level- and amplitude-dependent, thus allowing better adaptation in function estimation. We consider an automatic choice of the free prior parameter, guided by considerations on an exact risk analysis and on the shape of the thresholding rule, enabling the resulting estimator to be fully automated in practice. We also compute pointwise Bayesian credible intervals for the resulting function estimate using a simulation-based approach. We use several simulated examples to illustrate the performance of the proposed empirical Bayes term-by-term wavelet scheme, and we make comparisons with other classical and empirical Bayes term-by-term wavelet schemes. As a practical illustration, we present an application to a real-life data set that was collected in an atomic force microscopy study.
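The article's empirical Bayes thresholding rule depends on its selected prior and is not reproduced here; as a hedged, generic illustration of term-by-term wavelet thresholding with PyWavelets, the sketch below uses the classical universal threshold with soft thresholding in place of the article's rule.

```python
import numpy as np
import pywt

def wavelet_threshold_estimate(y, wavelet="db4", level=4):
    """Generic term-by-term soft thresholding of wavelet detail coefficients
    with the universal threshold sigma * sqrt(2 log n); illustrative only."""
    coeffs = pywt.wavedec(y, wavelet, level=level)
    # robust noise estimate from the finest-scale detail coefficients
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(y)))
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(y)]

# example usage: noisy sine signal
t = np.linspace(0, 1, 512)
noisy = np.sin(4 * np.pi * t) + 0.3 * np.random.default_rng(3).normal(size=t.size)
print(wavelet_threshold_estimate(noisy).shape)
```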
