Similar Documents
1.
General conditions for the asymptotic efficiency of certain new inference procedures based on empirical transform functions are developed. A number of important processes, such as the empirical characteristic function, the empirical moment generating function, and the empirical moments, are considered as special cases.
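
As a minimal illustration of the empirical transforms these procedures are built on, the sketch below computes the empirical characteristic function from a sample and compares it with the characteristic function of a standard normal. The grid, sample size, and normal reference are illustrative choices, not taken from the paper.

```python
import numpy as np

def empirical_cf(x, t_grid):
    """Empirical characteristic function: phi_hat(t) = mean_j exp(i*t*X_j)."""
    # Outer product t*x gives one row per grid point t.
    return np.exp(1j * np.outer(t_grid, x)).mean(axis=1)

rng = np.random.default_rng(0)
x = rng.normal(size=500)
t = np.linspace(-3, 3, 61)

phi_hat = empirical_cf(x, t)
phi_normal = np.exp(-t**2 / 2)  # characteristic function of N(0, 1)

# A simple transform-based statistic: maximal deviation over the grid.
print(np.max(np.abs(phi_hat - phi_normal)))
```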

2.
Statistical methods have the potential to be used effectively by industrial practitioners if they satisfy two criteria: functionality and usability. Statistical methods are usually the product of statistical research activities at universities and other research organizations. Some already satisfy these criteria; however, many do not. The effect is that potentially relevant methods are not used in practice as often as they could be. In this paper we present an approach to 'statistics development' in which the end-user is given a central position, so that the results of statistical research meet the needs and requirements of the practitioner. Examples of known and new methods are presented, and we discuss issues such as education in statistics, the link with statistical consultancy, and the publication of methods through various channels.

3.
The rules of American football favor the strategic placement of the 11 players per team, making the identification of statistical tendencies a particularly useful capability. Gambling on American football games is explained. Several automated prediction techniques are discussed and compared, including least squares, weighted least squares, James-Stein, and Harville. A more data-intensive approach is also discussed; that approach has coaching implications as well as predictive ability.
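
As a sketch of the simplest technique in that list, the code below fits least-squares team ratings to point margins, modeling each margin as the rating difference plus a common home-advantage term. The mini-schedule is hypothetical, and this formulation is one common choice, not necessarily the one compared in the paper.

```python
import numpy as np

# Hypothetical mini-schedule: (home_team, away_team, home point margin).
games = [(0, 1, 7), (1, 2, -3), (2, 0, 10), (0, 2, 4), (1, 0, -6)]
n_teams = 3

# Design matrix: margin = rating[home] - rating[away] + home_advantage.
X = np.zeros((len(games), n_teams + 1))
y = np.zeros(len(games))
for row, (h, a, margin) in enumerate(games):
    X[row, h], X[row, a] = 1.0, -1.0
    X[row, -1] = 1.0  # shared home-advantage column
    y[row] = margin

# Ratings are identified only up to an additive constant; lstsq returns the
# minimum-norm solution, so the ratings are centered afterwards for reading.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
ratings, home_adv = beta[:-1], beta[-1]
print(ratings - ratings.mean(), home_adv)
```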

4.
Conditional and marginal likelihood analysis has a long history of development. Some recent methods using exact and approximate density and distribution functions lead to more sharply defined likelihoods and to accurate observed levels of significance for a wide range of problems, including nonnormal regression and exponential linear models. These developments will be surveyed.

5.
With the rapid growth of modern technology, many biomedical studies are being conducted to collect massive datasets with volumes of multi-modality imaging, genetic, neurocognitive and clinical information from increasingly large cohorts. Simultaneously extracting and integrating rich and diverse heterogeneous information in neuroimaging and/or genomics from these big datasets could transform our understanding of how genetic variants impact brain structure and function, cognitive function and brain-related disease risk across the lifespan. Such understanding is critical for the diagnosis, prevention and treatment of numerous complex brain-related disorders (e.g., schizophrenia and Alzheimer's disease). However, the development of analytical methods for the joint analysis of both high-dimensional imaging phenotypes and high-dimensional genetic data, a big data squared (BD2) problem, presents major computational and theoretical challenges for existing analytical methods. Besides the high-dimensional nature of BD2, various neuroimaging measures often exhibit strong spatial smoothness and dependence, and genetic markers may have a natural dependence structure arising from linkage disequilibrium. We review some recent developments of various statistical techniques for imaging genetics, including massive univariate and voxel-wise approaches, reduced rank regression, mixture models and group sparse multi-task regression. By doing so, we hope that this review may encourage others in the statistical community to enter this new and exciting field of research. The Canadian Journal of Statistics 47: 108-131; 2019 © 2019 Statistical Society of Canada
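
A toy instance of the massive univariate approach mentioned in the review: each simulated "voxel" is regressed on a single simulated SNP, and Benjamini-Hochberg false discovery rate control is applied across voxels. All data, the additive genotype coding, and the 5% FDR level are synthetic stand-ins, not details from the review.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, n_voxels = 200, 1000
g = rng.binomial(2, 0.3, size=n).astype(float)  # one SNP, additive coding
y = rng.normal(size=(n, n_voxels))              # toy imaging phenotypes

# Massive univariate approach: a separate simple regression per voxel.
g_c = g - g.mean()
y_c = y - y.mean(axis=0)
beta = g_c @ y_c / (g_c @ g_c)
resid = y_c - np.outer(g_c, beta)
se = np.sqrt((resid**2).sum(axis=0) / (n - 2) / (g_c @ g_c))
pvals = 2 * stats.t.sf(np.abs(beta / se), df=n - 2)

# Benjamini-Hochberg: reject the k smallest p-values, where k is the largest
# index with p_(k) <= (k/m) * alpha.
order = np.argsort(pvals)
crit = 0.05 * np.arange(1, n_voxels + 1) / n_voxels
below = np.nonzero(pvals[order] <= crit)[0]
print("voxels flagged:", below[-1] + 1 if below.size else 0)
```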

6.
Proactive evaluation of drug safety with systematic screening and detection is critical to protecting patients' safety, and it is important in the regulatory approval of new drug indications and in postmarketing communications and label renewals. In recent years, quite a few statistical methodologies have been developed to better evaluate drug safety throughout the life cycle of product development. Statistical methods for flagging safety signals have been developed in two major areas: one for data collected from spontaneous reporting systems, mostly in the postmarketing setting, and the other for data from clinical trials. To our knowledge, the methods developed for one area have so far not been applied to the other. In this article, we propose to utilize all such methods for flagging safety signals in both areas, regardless of which area they were originally developed for. We therefore selected eight representative methods for systematic comparison through simulations: proportional reporting ratios, reporting odds ratios, the maximum likelihood ratio test, the Bayesian confidence propagation neural network method, the chi-square test for rate comparison, the Benjamini and Hochberg procedure, a new double false discovery rate control procedure, and a Bayesian hierarchical mixture model. The Benjamini and Hochberg procedure and the new double false discovery rate control procedure perform best overall in terms of sensitivity and false discovery rate. The likelihood ratio test also performs well when the sample sizes are large. Copyright © 2014 John Wiley & Sons, Ltd.
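
A minimal sketch of two of the disproportionality measures named above, the proportional reporting ratio and the reporting odds ratio, computed from a hypothetical 2x2 table of spontaneous-report counts; the counts and the 95% interval construction are illustrative.

```python
import math

# Hypothetical spontaneous-report counts for one drug-event pair:
#                  event of interest   all other events
# drug of interest       a = 12             b = 988
# all other drugs        c = 40             d = 9960
a, b, c, d = 12, 988, 40, 9960

# Proportional reporting ratio: the event's share of the drug's reports
# relative to its share among all other drugs' reports.
prr = (a / (a + b)) / (c / (c + d))

# Reporting odds ratio, with a 95% CI formed on the log scale.
ror = (a * d) / (b * c)
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo, hi = (math.exp(math.log(ror) + z * se) for z in (-1.96, 1.96))

print(f"PRR={prr:.2f}  ROR={ror:.2f}  95% CI=({lo:.2f}, {hi:.2f})")
```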

7.
Statistical disclosure control (SDC) is a balancing act between mandatory data protection and the understandable demand from researchers for access to original data. In this paper, a family of methods is defined to 'mask' sensitive variables before data files can be released. In the first step, the variable to be masked is 'cloned' (C). Then, the duplicated variable, as a whole or just in part, is 'suppressed' (S). The third step of the masking procedure 'imputes' (I) data for these artificial missings. The original variable can then be deleted, and its masked substitute serves as the basis for the analysis of the data. The idea of this general 'CSI framework' is to open the wide field of imputation methods for SDC. The method applied in the I-step can make use of available auxiliary variables, including the original variable. Several members of this family of methods that deliver variance estimators are discussed in some detail. Furthermore, a simulation study analyzes various methods belonging to the family with respect to both the quality of parameter estimation and privacy protection. Based on the results obtained, recommendations are formulated for different estimation tasks.
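
A toy walk-through of the C, S, and I steps. The sensitive variable, the 60% suppression rate, and the regression-plus-residual imputation below are all illustrative choices; the framework deliberately leaves the I-step open to any imputation method.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
aux = rng.normal(size=n)                  # auxiliary variable, safe to release
income = 2.0 * aux + rng.normal(size=n)   # sensitive variable to be masked

# C: clone the sensitive variable.
masked = income.copy()

# S: suppress a share of the clone (here 60%, an arbitrary choice).
suppressed = rng.random(n) < 0.6
masked[suppressed] = np.nan

# I: impute the artificial missings; here, linear regression on the auxiliary
# variable plus a residual draw, one of many admissible I-steps.
keep = ~suppressed
slope, intercept = np.polyfit(aux[keep], masked[keep], 1)
resid_sd = np.std(masked[keep] - (slope * aux[keep] + intercept))
masked[suppressed] = (slope * aux[suppressed] + intercept
                      + rng.normal(scale=resid_sd, size=suppressed.sum()))

# The original variable can now be dropped; analyses use `masked` instead.
print(np.corrcoef(income, masked)[0, 1])
```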

8.
A tutorial introduction is given to a limited selection of artificial neural network models. Particular emphasis is laid on multi-layer perceptrons and simple Hopfield associative memories. In the context of perceptrons, two case studies are described, concerning Zip code detection and coin recognition. A simple experiment is reported with a Hopfield net. Some other approaches and applications are briefly listed, and bibliographical remarks are made. Throughout, points of contact with mainstream statistical methodology are highlighted.
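
A small Hopfield associative memory of the kind the tutorial experiments with, sketched under standard assumptions: Hebbian outer-product storage and synchronous sign updates. The stored patterns are arbitrary.

```python
import numpy as np

# Store two +/-1 patterns via the Hebbian outer-product rule.
patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)  # no self-connections

def recall(state, max_steps=10):
    """Synchronous sign updates until the state stops changing."""
    state = state.copy()
    for _ in range(max_steps):
        new = np.where(W @ state >= 0, 1, -1)
        if np.array_equal(new, state):
            break
        state = new
    return state

noisy = np.array([1, -1, 1, -1, -1, -1])  # first pattern with one bit flipped
print(recall(noisy))
```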

9.
On Improving China's Unemployment Statistics Methodology   (Cited 7 times: 0 self-citations, 7 by others)
樊茂勇 (Fan Maoyong), 《统计研究》 (Statistical Research), 2001, 18(6): 19-23
Unemployment has troubled countries around the world ever since industrialization, and it stands alongside debt and inflation as one of the world's three great economic problems. In its transition to a market economy, China has inevitably encountered unemployment as well. Unemployment is now widespread in China and has become a highly unfavorable factor for social stability and economic development, drawing close attention from all sectors of society. For a long time, however, China denied that unemployment existed in a socialist society, leaving a vacuum in the national unemployment statistics. Only in 1993 did the statistical authorities rename the so-called "persons awaiting employment" (待业人员) as "unemployed persons" (失业人员). Although this was a historic step forward, the theory and methods of unemployment statistics, as well as the concrete details of their implementation, …

10.
In pre-clinical oncology studies, tumor-bearing animals are treated and observed over a period of time in order to measure and compare the efficacy of one or more cancer-intervention therapies along with a placebo/standard-of-care group. A data analysis is typically carried out by modeling and comparing tumor volumes, functions of tumor volumes, or survival. Data analysis on tumor volumes is complicated because animals under observation may be euthanized prior to the end of the study for one or more reasons, such as when an animal's tumor volume exceeds an upper threshold. In such a case, the tumor volume is missing not at random for the time remaining in the study. To work around the non-random missingness issue, several statistical methods have been proposed in the literature, including the rate of change in log tumor volume and the partial area under the curve. In this work, an examination and comparison of the test size and statistical power of these and other popular methods for the analysis of tumor volume data is performed through realistic Monte Carlo computer simulations. The performance, advantages, and drawbacks of popular statistical methods for animal oncology studies are reported, and the recommended methods are applied to a real data set.
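
One summary measure named above, the rate of change in log tumor volume, reduces each animal to an ordinary least-squares slope of log(volume) on study day, and the slopes are then compared across groups. The simulated growth curves and the two-sample t-test below are illustrative, not the paper's simulation design.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
days = np.array([0, 3, 7, 10, 14])

def simulate_animal(growth_rate):
    """Toy exponential tumor growth with multiplicative lognormal noise."""
    return 100 * np.exp(growth_rate * days) * rng.lognormal(0, 0.1, days.size)

control = [simulate_animal(0.20) for _ in range(8)]
treated = [simulate_animal(0.12) for _ in range(8)]

def log_slope(volumes):
    """Rate of change in log tumor volume: OLS slope of log(V) on day."""
    # In a real study, an animal euthanized early would contribute only its
    # observed time points to this fit.
    return np.polyfit(days, np.log(volumes), 1)[0]

s_control = [log_slope(v) for v in control]
s_treated = [log_slope(v) for v in treated]
print(stats.ttest_ind(s_control, s_treated))
```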

11.
Using Markov chain representations, we evaluate and compare the performance of cumulative sum (CUSUM) and Shiryayev-Roberts methods in terms of the zero- and steady-state average run length and worst-case signal resistance measures. We also calculate the signal resistance values from the worst- to the best-case scenarios for both methods. Our results support the recommendation that Shewhart limits be used with CUSUM and Shiryayev-Roberts methods, especially for small values of the shift in the process mean that the methods are designed to detect optimally.
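
A sketch of the Markov chain evaluation for a one-sided CUSUM for a normal mean, following the usual Brook-Evans-style discretization of [0, h). The reference value k, decision limit h, and grid size are illustrative; the Shiryayev-Roberts statistic would be handled with an analogous transition matrix.

```python
import numpy as np
from scipy import stats

def cusum_zero_state_arl(k, h, mu=0.0, m=200):
    """Zero-state ARL of the CUSUM C_t = max(0, C_{t-1} + X_t - k) with
    X_t ~ N(mu, 1), via a Markov chain with m transient cells on [0, h)."""
    w = h / m                       # cell width
    mid = (np.arange(m) + 0.5) * w  # cell midpoints
    Q = np.zeros((m, m))            # transitions among transient states
    for i, c in enumerate(mid):
        # The landing point is c + X - k; landing at or below 0 is folded
        # into the first cell as an atom at exactly zero.
        Q[i] = (stats.norm.cdf(mid + w / 2 - c + k, loc=mu)
                - stats.norm.cdf(mid - w / 2 - c + k, loc=mu))
        Q[i, 0] += stats.norm.cdf(k - c, loc=mu)
    # The ARL vector solves (I - Q) L = 1; start from C_0 = 0 (first cell).
    return np.linalg.solve(np.eye(m) - Q, np.ones(m))[0]

print(cusum_zero_state_arl(k=0.5, h=4.0))          # in-control ARL
print(cusum_zero_state_arl(k=0.5, h=4.0, mu=1.0))  # ARL after a 1-sigma shift
```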

12.
A block cipher is one of the most common forms of algorithm used for data encryption. This paper describes an efficient set of statistical methods for analysing the security of these algorithms under the black-box approach. The procedures can be fully automated, which provides the designer or user of a block cipher with a useful set of tools for security analysis.
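
One representative member of the kind of automated battery such an analysis uses: the frequency (monobit) test applied to cipher output treated as a black box. Here os.urandom stands in for real ciphertext, so the example shows the mechanics only.

```python
import math
import os

def monobit_test(data: bytes):
    """Frequency (monobit) test: if the bytes behave like a random bitstream,
    the normalized bit count is approximately N(0, 1)."""
    ones = sum(bin(byte).count("1") for byte in data)
    n_bits = 8 * len(data)
    s = (2 * ones - n_bits) / math.sqrt(n_bits)
    p_value = math.erfc(abs(s) / math.sqrt(2))
    return s, p_value

# Stand-in for block-cipher output; a real black-box analysis would feed in
# ciphertext produced from structured plaintexts and keys.
ciphertext = os.urandom(4096)
print(monobit_test(ciphertext))
```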

13.
Twenty-five years ago the use of Bayesian methods in pharmaceutical R&D was non-existent. Today that is no longer true. In this paper I describe my own personal journey along the road from discovery of Bayesian methods to their routine use in the pharmaceutical industry.

14.
The Statisticians in the Pharmaceutical Industry Toxicology Special Interest Group has collated and compared statistical analysis methods for a number of toxicology study types, including general toxicology, genetic toxicology, safety pharmacology and carcinogenicity. In this paper, we present the study design, experimental units and analysis methods.

15.
Propensity score analysis (PSA) is a technique to correct for potential confounding in observational studies. Covariate adjustment, matching, stratification, and inverse weighting are the four most commonly used methods involving propensity scores. The main goal of this research is to determine which PSA method performs best in terms of protecting against spurious association detection, as measured by the Type I error rate, while maintaining sufficient power to detect a true association if one exists. An examination of these PSA methods, along with ordinary least squares regression, was conducted under two cases: correct PSA model specification and incorrect PSA model specification. PSA covariate adjustment and PSA matching maintain the nominal Type I error rate when the PSA model is correctly specified, but only PSA covariate adjustment achieves adequate power levels. The other methods produced conservative Type I error rates in some scenarios and liberal ones in others.
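
A minimal sketch of one of the four methods, inverse probability weighting, on simulated data with a known treatment effect of 1. The logistic propensity model and the scikit-learn dependency are choices of this example, not prescriptions from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 2000
x = rng.normal(size=(n, 2))                     # measured confounders
p_treat = 1 / (1 + np.exp(-(x[:, 0] - 0.5 * x[:, 1])))
treat = rng.binomial(1, p_treat)
y = 1.0 * treat + x[:, 0] + rng.normal(size=n)  # true treatment effect = 1

# Estimate propensity scores by logistic regression on the confounders.
ps = LogisticRegression().fit(x, treat).predict_proba(x)[:, 1]

# Inverse probability weighting estimate of the average treatment effect.
w1, w0 = treat / ps, (1 - treat) / (1 - ps)
ate = (w1 * y).sum() / w1.sum() - (w0 * y).sum() / w0.sum()
print(ate)  # should land near the true effect of 1
```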

16.
邱东 (Qiu Dong), 《统计研究》 (Statistical Research), 2001, 18(4): 16-19
Professor Xie Bangchang (谢邦昌), a statistician in Taiwan, once observed that every field has its upstream, midstream, and downstream, and statistics is no exception. In the upstream of statistics, however, some highly accomplished scholars are not necessarily willing to help the midstream and downstream solve their problems, while the midstream and downstream feel the upstream is out of reach and dare not bring their problems to it, regarding its theory as too abstruse. A rift has thus opened between the upstream, midstream, and downstream of statistics, hindering the discipline's development (see 《统计的出世观与入世观》, 《中国统计》, 1999, No. 2). I share this view deeply. Here I first divide statistics into levels, and then analyze them from the demand side and the supply side, in the hope of promoting a clearer positioning of both the discipline and the practical work of statistical research.

17.
Optimal batch-sequential designs are difficult to compute, even when sufficient statistics and relatively uncomplicated loss functions simplify the required calculations. While backward induction applies, its difficulty grows exponentially in the number of stages, whereas a recently developed forward algorithm grows only linearly but involves a maximization over a rather flat surface. This paper explores a hybrid algorithm, partially backward induction and partially forward, that has some of the advantages of each.
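
A toy backward induction for a batch-sequential design, assuming Bernoulli observations with a Beta(1, 1) prior, a terminal decision about the sign of p - 0.5 whose loss is the posterior probability of deciding wrongly, and a fixed cost per batch; all three ingredients are illustrative. The recursion runs from the horizon backward, the structure whose cost the paper's hybrid algorithm is designed to tame.

```python
from functools import lru_cache

from scipy import stats

K, b, c = 3, 5, 0.02  # max stages, batch size, sampling cost per batch

@lru_cache(maxsize=None)
def optimal_loss(s, n, k):
    """Minimal expected loss given s successes in n trials, k batches left."""
    post = stats.beta(1 + s, 1 + n - s)
    stop = min(post.cdf(0.5), post.sf(0.5))  # loss of the better decision now
    if k == 0:
        return stop
    # Continuing costs c plus the beta-binomial predictive average of the
    # optimal loss one stage later: the backward-induction step.
    pred = stats.betabinom(b, 1 + s, 1 + n - s)
    cont = c + sum(pred.pmf(j) * optimal_loss(s + j, n + b, k - 1)
                   for j in range(b + 1))
    return min(stop, cont)

print(optimal_loss(0, 0, K))  # optimal expected loss before any data
```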
