Similar Articles
20 similar articles found.
1.
This paper reviews the impact of the computer on the analysis and interpretation of data. It suggests the need for professional statisticians to recognize that almost all future analysis of data will be carried out by non-statisticians with the aid of statistical program packages. Therefore, the emphasis of statistical training for scientists, engineers, administrators and decision-makers should be on the design of data collection and the choice of appropriate methods of analysis. Both in the teaching of statistics and in the development of computer programs for statistical analysis, there are important and urgent tasks to be addressed by professional statisticians.

2.
A vast collection of reusable mathematical and statistical software is now available for use by scientists and engineers in their modeling efforts. This software represents a significant source of mathematical expertise, created and maintained at considerable expense. Unfortunately, the collection is so heterogeneous that it is a tedious and error-prone task simply to determine what software is available to solve a given problem. In mathematical problem solving environments of the future such questions will be fielded by expert software advisory systems. One way for such systems to systematically associate available software with the problems they solve is to use a problem classification system. In this paper we describe a detailed tree-structured problem-oriented classification system appropriate for such use.
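To make the idea of a tree-structured problem classification concrete, here is a toy sketch (not the authors' actual taxonomy; the class names and routine names are invented for illustration) of how such a tree might be represented and searched:

```python
# Toy problem-classification tree: each node is a class of
# mathematical problems; leaves list candidate routines.
taxonomy = {
    "linear algebra": {
        "linear systems": ["solver_a", "solver_b"],
        "eigenvalue problems": ["eig_solver"],
    },
    "optimization": {
        "unconstrained": ["quasi_newton"],
        "constrained": ["active_set", "interior_point"],
    },
}

def lookup(tree, path):
    """Walk a path of class names down the tree and return
    the software associated with the resulting subclass."""
    node = tree
    for label in path:
        node = node[label]
    return node

print(lookup(taxonomy, ["optimization", "constrained"]))
```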

3.
Materials formulation and processing are important industrial activities, and most materials know-how comes from physical experiments. Our impression, based on discussions with materials scientists, is that statistically planned experiments are infrequently used in materials research. This scientific and engineering area provides an excellent opportunity both for using the available techniques of statistically planned experiments, including mixture experiments, and for identifying opportunities for collaborative research leading to further advances in statistical methods for scientists and engineers. This paper describes an application of Scheffé's (1958) simplex approach for mixture experiments to the formulation of high-temperature superconducting compounds. This example has given us a better appreciation of the needs of materials scientists and has provided us opportunities for further collaborative research.
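As a rough illustration of the simplex approach (a generic sketch, not the authors' superconductor study), a {q, m} Scheffé simplex-lattice design enumerates all blends whose component proportions are multiples of 1/m and sum to one:

```python
from itertools import product

def simplex_lattice(q, m):
    """Generate a {q, m} Scheffe simplex-lattice design:
    all q-component blends whose proportions are multiples
    of 1/m and sum to 1."""
    pts = []
    for combo in product(range(m + 1), repeat=q):
        if sum(combo) == m:
            pts.append(tuple(c / m for c in combo))
    return pts

# e.g. a 3-component lattice with proportions in steps of 1/2:
for blend in simplex_lattice(3, 2):
    print(blend)
```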

4.
Evidence-based quantitative methodologies have been proposed to inform decision-making in drug development, such as metrics for go/no-go decisions or predictions of success, defined as statistical significance in future clinical trials. While these methodologies appropriately address some critical questions about the potential of a drug, they either consider the past evidence without predicting the outcome of the future trials or focus only on efficacy, failing to account for the multifaceted aspects of a successful drug development. As quantitative benefit-risk assessments could enhance decision-making, we propose a more comprehensive approach using a composite definition of success based not only on the statistical significance of the treatment effect on the primary endpoint but also on its clinical relevance and on a favorable benefit-risk balance in the next pivotal studies. For one drug, we can thus study several development strategies before starting the pivotal trials by comparing their predictive probabilities of success. The predictions are based on the available evidence from the previous trials, to which new hypotheses on the future development can be added. The resulting predictive probability of composite success provides a useful summary to support the discussions of the decision-makers. We present a fictitious but realistic example in major depressive disorder inspired by a real decision-making case.
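A predictive probability of composite success can be approximated by simulation. The sketch below is a deliberately simplified illustration, not the authors' method: it assumes a normal posterior for the treatment effect, an invented clinical-relevance threshold, and omits the benefit-risk component of the composite:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed posterior for the true treatment effect from completed
# trials (illustrative numbers only):
post_mean, post_sd = 2.0, 1.0
n_future, sigma = 200, 8.0   # future pivotal trial size per arm, outcome SD
mcid = 1.5                   # assumed clinical-relevance threshold

se = sigma * np.sqrt(2 / n_future)   # SE of the future effect estimate
succ, sims = 0, 20_000
for _ in range(sims):
    theta = rng.normal(post_mean, post_sd)   # draw a "true" effect
    est = rng.normal(theta, se)              # simulated trial estimate
    significant = est / se > 1.96            # one-sided 2.5% significance
    relevant = est >= mcid                   # clinically relevant estimate
    succ += significant and relevant         # composite success
print("predictive probability of composite success:", succ / sims)
```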

5.
At Los Alamos National Laboratory (LANL), statistical scientists develop solutions for a variety of national security challenges through scientific excellence, typically as members of interdisciplinary teams. At LANL, mentoring is actively encouraged and practiced to develop statistical skills and positive career-building behaviors. Mentoring activities targeted at different career phases from student to junior staff are an important catalyst for both short and long term career development. This article discusses mentoring strategies for undergraduate and graduate students through internships as well as for postdoctoral research associates and junior staff. Topics addressed include project selection, progress, and outcome; intellectual and social activities that complement the student internship experience; key skills/knowledge not typically obtained in academic training; and the impact of such internships on students’ careers. Experiences and strategies from a number of successful mentorships are presented. Feedback from former mentees obtained via a questionnaire is incorporated. These responses address some of the benefits the respondents received from mentoring, helpful contributions and advice from their mentors, key skills learned, and how mentoring impacted their later careers.

6.
Consider teaching a three-day short course in modern regression methodology to a small group consisting of engineers, social scientists, managers (who often have a business background), and medical researchers. Teaching such a course offers a different set of problems and challenges than encountered when teaching a course in the university setting. The instructor must be highly organized, well prepared, and flexible for the successful presentation of an intense short course. Ten suggestions are given that will increase the likelihood that the course will meet the educational objectives of such a diverse audience.

7.
Most statistical analyses use hypothesis tests or estimation about parameters to form inferential conclusions. I think this is noble, but misguided. The point of view expressed here is that observables are fundamental, and that the goal of statistical modeling should be to predict future observations, given the current data and other relevant information. Further, the prediction of future observables provides multiple advantages to practicing scientists, and to science in general. These include an interpretable numerical summary of a quantity of direct interest to current and future researchers, a calibrated prediction of what’s likely to happen in future experiments, a prediction that can be either “corroborated” or “refuted” through experimentation, and avoidance of inference about parameters, quantities that exist only as convenient indices of hypothetical distributions. Finally, the predictive probability of a future observable can be used as a standard for communicating the reliability of the current work, regardless of whether confirmatory experiments are conducted. Adoption of this paradigm would improve our rigor for scientific accuracy and reproducibility by shifting our focus from “finding differences” among hypothetical parameters to predicting observable events based on our current scientific understanding.
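As a minimal sketch of the predictive viewpoint (a generic conjugate-normal example with made-up numbers, not from the article), the posterior predictive probability that a future observation exceeds a threshold can be computed directly:

```python
import numpy as np
from scipy import stats

# Current data (illustrative, not from the article)
y = np.array([5.1, 4.8, 5.6, 5.0, 5.3])
sigma = 0.5              # assumed known measurement SD
mu0, tau0 = 5.0, 1.0     # prior mean and SD for the true mean

# Conjugate normal posterior for the mean
n = len(y)
tau_n2 = 1 / (1 / tau0**2 + n / sigma**2)
mu_n = tau_n2 * (mu0 / tau0**2 + y.sum() / sigma**2)

# Posterior predictive distribution of ONE future observation
pred_sd = np.sqrt(tau_n2 + sigma**2)
p_above = 1 - stats.norm.cdf(5.5, loc=mu_n, scale=pred_sd)
print(f"P(next observation > 5.5 | data) = {p_above:.3f}")
```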

8.
The identification of active effects in supersaturated designs (SSDs) is a problem of considerable interest to both scientists and engineers. The complicated structure of the design matrix makes the analysis of such designs difficult. Although several methods have been proposed so far, solutions beyond one or two active factors remain inadequate. This article presents a heuristic approach for analyzing SSDs using the cumulative sum (CUSUM) control chart under a sure independence screening approach. Simulations comparing the proposed method with other well-known methods from the literature are used to investigate its performance. The results demonstrate the power of the proposed methodology.
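For reference, here is the basic one-sided CUSUM building block on which such charts rest (a generic mean-shift chart with assumed parameters, not the paper's SSD-specific screening procedure):

```python
import numpy as np

def cusum(x, target=0.0, k=0.5, h=5.0):
    """Basic two-sided CUSUM for a mean shift.
    k: reference value (half the shift, in SD units);
    h: decision interval. Returns indices where either
    cumulative statistic crosses h."""
    cp = cm = 0.0
    alarms = []
    for i, xi in enumerate(x):
        cp = max(0.0, cp + (xi - target) - k)
        cm = max(0.0, cm - (xi - target) - k)
        if cp > h or cm > h:
            alarms.append(i)
            cp = cm = 0.0   # restart after an alarm
    return alarms

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 50), rng.normal(1.5, 1, 30)])
print(cusum(x))
```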

9.
Many controversies in statistics are due primarily or solely to poor quality control in journals, bad statistical textbooks, bad teaching, unclear writing, and lack of knowledge of the historical literature. One way to improve the practice of statistics and resolve these issues is to do what the initiators of the 2016 ASA statement did: take one issue at a time, have extensive discussions about the issue among statisticians of diverse backgrounds and perspectives, and eventually develop and publish a broadly supported consensus on that issue. Upon completion of this task, we then move on to deal with another core issue in the same way. We propose as the next project a process that might lead quickly to a strong consensus that the term “statistically significant” and all its cognates and symbolic adjuncts be disallowed in the scientific literature except where the focus is on the history of statistics and its philosophies and methodologies. Calculation and presentation of accurate p-values will often remain highly desirable though not obligatory. Supplementary materials for this article are available online in the form of an appendix listing the names and institutions of 48 other statisticians and scientists who endorse the principal propositions put forward here.

10.
This is an expository article. Here we show how the successfully used Kalman filter, popular with control engineers and other scientists, can be easily understood by statisticians if we use a Bayesian formulation and some well-known results in multivariate statistics. We also give a simple example illustrating the use of the Kalman filter for quality control work.
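To echo the Bayesian reading (predict = form the prior, update = condition on the new measurement), here is a minimal generic Kalman filter step; the model matrices and the scalar quality-control numbers are invented for illustration and are not the article's example:

```python
import numpy as np

def kalman_step(m, P, y, A, Q, H, R):
    """One predict/update cycle, read Bayesianly: the prior
    N(m_pred, P_pred) is combined with the likelihood of the
    measurement y to give the posterior N(m_post, P_post)."""
    # Predict (prior for the next state)
    m_pred = A @ m
    P_pred = A @ P @ A.T + Q
    # Update (posterior given y)
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    m_post = m_pred + K @ (y - H @ m_pred)
    P_post = (np.eye(len(m)) - K @ H) @ P_pred
    return m_post, P_post

# Scalar quality-control toy: track a slowly drifting process level
A, H = np.eye(1), np.eye(1)
Q = 0.1 * np.eye(1)   # assumed process noise
R = 0.5 * np.eye(1)   # assumed measurement noise
m, P = np.zeros(1), np.eye(1)
for y in [1.1, 0.9, 1.3, 2.6]:
    m, P = kalman_step(m, P, np.array([y]), A, Q, H, R)
    print(m[0], P[0, 0])
```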

11.
12.
余芳东 《统计研究》2012,29(8):108-112
Since 1993, China has carried out International Comparison Program (ICP) survey activities five times in different forms, becoming more deeply and broadly integrated into the international statistical system. Each round differed in its benchmark period, survey coverage, objects of comparison, comparison methods, and organizational arrangements, and the comparison results differed accordingly. The evolution of China's participation in the ICP bears witness to the development and growing capability of government statistics.

13.
Proactive evaluation of drug safety with systematic screening and detection is critical to protecting patients' safety and important in the regulatory approval of new drug indications, postmarketing communications, and label renewals. In recent years, quite a few statistical methodologies have been developed to better evaluate drug safety through the life cycle of product development. The statistical methods for flagging safety signals have been developed in two major areas – one for data collected from spontaneous reporting systems, mostly in the postmarketing setting, and the other for data from clinical trials. To our knowledge, the methods developed for one area have so far not been applied to the other. In this article, we propose to utilize all such methods for flagging safety signals in both areas, regardless of which area they were originally developed for. We therefore selected eight typical methods for systematic comparison through simulations: proportional reporting ratios, reporting odds ratios, the maximum likelihood ratio test, the Bayesian confidence propagation neural network method, the chi-square test for rate comparison, the Benjamini and Hochberg procedure, the new double false discovery rate control procedure, and a Bayesian hierarchical mixture model. The Benjamini and Hochberg procedure and the new double false discovery rate control procedure perform best overall in terms of sensitivity and false discovery rate. The likelihood ratio test also performs well when the sample sizes are large.
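Two of the listed disproportionality measures have simple closed forms. The sketch below computes the proportional reporting ratio and the reporting odds ratio from a 2x2 table of spontaneous reports using their standard definitions; the counts are invented for illustration:

```python
import numpy as np

def prr_ror(a, b, c, d):
    """Disproportionality measures from a 2x2 table of reports:
                  event   other events
      drug          a          b
      other drugs   c          d
    """
    prr = (a / (a + b)) / (c / (c + d))
    ror = (a * d) / (b * c)
    # Approximate 95% CI for the ROR on the log scale
    se = np.sqrt(1/a + 1/b + 1/c + 1/d)
    ci = np.exp(np.log(ror) + np.array([-1.96, 1.96]) * se)
    return prr, ror, ci

print(prr_ror(a=25, b=500, c=100, d=10_000))
```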

14.
This paper reviews a number of extreme value models which have been applied to corrosion problems. The techniques considered are used to model and predict the statistical behaviour of corrosion extremes, such as the largest pit, thinnest wall, maximum penetration or similar assessments of corrosion phenomena. These techniques can be applied to measurements over a regular grid or to measurements of selected extremes, and can be adapted to accommodate all values over a selected threshold, a selected number of the largest values, or only the single largest value. Data can come from one coupon or several coupons, and can be modelled to allow for dependence on environmental conditions, surface area examined, and duration of exposure or of experimentation. The techniques are demonstrated on data from laboratory experiments and also on data collected in an industrial context.
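As a generic illustration of the block-maxima flavour of these models (hypothetical pit depths, not the paper's data), one can fit a Gumbel distribution to per-coupon maximum pit depths and read off a return level:

```python
import numpy as np
from scipy import stats

# Hypothetical maximum pit depths (mm), one per inspected coupon
max_pits = np.array([0.81, 0.94, 0.77, 1.10, 0.88, 1.02,
                     0.95, 1.21, 0.86, 0.99])

# Fit a Gumbel (type-I extreme value) distribution to the maxima
loc, scale = stats.gumbel_r.fit(max_pits)

# Depth expected to be exceeded once in 100 coupons (return level)
return_level = stats.gumbel_r.ppf(1 - 1/100, loc, scale)
print(f"mu={loc:.3f}, beta={scale:.3f}, 1-in-100 depth={return_level:.3f} mm")
```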

15.
Monitoring health care performance outcomes such as post-operative mortality rates has recently become more common, spurring new statistical methodologies designed for this purpose. One such methodology is the Risk-adjusted Cumulative Sum chart (RA-CUSUM) for monitoring binary outcomes such as mortality after cardiac surgery. When building RA-CUSUMs, independence and model correctness are assumed. We carry out a simulation study to examine the effect of violating these two assumptions on the chart's performance.
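A minimal sketch of a risk-adjusted CUSUM for binary outcomes, using the standard likelihood-ratio score for an odds-ratio shift; the risk model, the shift to detect, and the decision limit below are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def ra_cusum(outcomes, risks, OR=2.0, h=4.5):
    """Risk-adjusted CUSUM for binary outcomes (e.g. 30-day
    mortality). risks[i] is the model-predicted probability of
    death for patient i; OR is the odds-ratio shift to detect."""
    c, alarms = 0.0, []
    for i, (y, p) in enumerate(zip(outcomes, risks)):
        denom = 1 - p + OR * p
        # likelihood-ratio weight: reward deaths among low-risk
        # patients more than deaths among high-risk patients
        w = np.log(OR / denom) if y == 1 else np.log(1 / denom)
        c = max(0.0, c + w)
        if c > h:
            alarms.append(i)
            c = 0.0
    return alarms

rng = np.random.default_rng(2)
risks = rng.uniform(0.01, 0.2, 300)
outcomes = rng.binomial(1, np.minimum(1.0, 2 * risks))  # degraded process
print(ra_cusum(outcomes, risks))
```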

16.
Statistics as data is ancient, but as a discipline of study and research it has a short history. Courses leading to degrees in statistics were introduced in universities only some sixty to seventy years ago, and statistics was not then considered to constitute a basic discipline with a subject matter of its own. During the last seventy-five years, however, it has developed into a powerful blend of science, technology and art for solving problems in all areas of human endeavor. Nowadays statistics is used in scientific research, economic development through optimum use of resources, increasing industrial productivity, medical diagnosis, legal practice, disputed authorship, and optimum decision making at individual and institutional levels. What is the future of statistics in the coming millennium, dominated by information technology encompassing the whole of communications, interaction with intelligent systems, massive databases, and complex information processing networks? The current statistical methodology, based on probabilistic models applied to small data sets, appears inadequate to meet society's needs for quick processing of data and for making the information available for practical purposes. Ad hoc methods are being put forward under the title of data mining by computer scientists and engineers to meet the needs of customers. The paper reviews the current state of the art in statistics and discusses possible future developments, considering the availability of large data sets, enormous computing power and efficient optimization techniques using genetic algorithms and neural networks.


17.
苏为华 《统计研究》1996,13(5):34-37
On the Construction Process of Statistical Indicators (论统计指标的构造过程). The construction of statistical indicators is a process of complicated logical thinking that can be divided into a series of...

18.
General augmentation techniques in experimental design, such as the foldover and the semifold, have been common practice in industrial experimentation for years. Even though these techniques are highly effective in maintaining balance and near orthogonality, they have disadvantages, such as the inability to decouple specific terms, and inefficiency. This article develops a sequential experimentation approach capable of overcoming the drawbacks of the general methods while maintaining some of their benefits. We focus on an algorithm for the sequential augmentation of resolution III fractional factorial designs. Advantages, limitations, and potential benefits of the new method are provided.
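For context, the classical full foldover that the paper improves upon is simple to state: append the mirror image of the design. A toy sketch for a 2^(3-1) resolution III design (the generator C = AB is my choice for illustration):

```python
import numpy as np
from itertools import product

# A 2^(3-1) resolution III design with generator C = AB
base = np.array([[a, b, a * b] for a, b in product([-1, 1], repeat=2)])

# Full foldover: append the design with all signs reversed.
# This breaks the aliasing of main effects with two-factor
# interactions, raising the combined design to resolution IV.
folded = np.vstack([base, -base])
print(folded)
```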

19.
Immuno‐oncology has emerged as an exciting new approach to cancer treatment. Common immunotherapy approaches include cancer vaccine, effector cell therapy, and T‐cell–stimulating antibody. Checkpoint inhibitors such as cytotoxic T lymphocyte–associated antigen 4 and programmed death‐1/L1 antagonists have shown promising results in multiple indications in solid tumors and hematology. However, the mechanisms of action of these novel drugs pose unique statistical challenges in the accurate evaluation of clinical safety and efficacy, including late‐onset toxicity, dose optimization, evaluation of combination agents, pseudoprogression, and delayed and lasting clinical activity. Traditional statistical methods may not be the most accurate or efficient. It is highly desirable to develop the most suitable statistical methodologies and tools to efficiently investigate cancer immunotherapies. In this paper, we summarize these issues and discuss alternative methods to meet the challenges in the clinical development of these novel agents. For safety evaluation and dose‐finding trials, we recommend the use of a time‐to‐event model‐based design to handle late toxicities, a simple 3‐step procedure for dose optimization, and flexible rule‐based or model‐based designs for combination agents. For efficacy evaluation, we discuss alternative endpoints/designs/tests including the time‐specific probability endpoint, the restricted mean survival time, the generalized pairwise comparison method, the immune‐related response criteria, and the weighted log‐rank or weighted Kaplan‐Meier test. The benefits and limitations of these methods are discussed, and some recommendations are provided for applied researchers to implement these methods in clinical practice.
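Among the alternative efficacy endpoints listed, the restricted mean survival time has a particularly direct interpretation: the area under the Kaplan-Meier curve up to a horizon tau. A self-contained sketch with toy data (not from any trial):

```python
import numpy as np

def rmst(time, event, tau):
    """Restricted mean survival time up to horizon tau:
    the area under the Kaplan-Meier curve on [0, tau]."""
    order = np.argsort(time)
    t, d = np.asarray(time)[order], np.asarray(event)[order]
    surv, prev_t, area = 1.0, 0.0, 0.0
    at_risk = len(t)
    for i in range(len(t)):
        if t[i] > tau:
            break
        area += surv * (t[i] - prev_t)       # survival is a step function
        if d[i] == 1:
            surv *= 1 - 1 / at_risk          # KM step at an event time
        at_risk -= 1                         # censored subjects leave too
        prev_t = t[i]
    area += surv * (tau - prev_t)            # last piece up to tau
    return area

# Toy data: months to event, 1 = event observed, 0 = censored
t = [2, 4, 5, 7, 9, 12, 14, 16]
e = [1, 1, 0, 1, 0, 1, 0, 1]
print(rmst(t, e, tau=12))
```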

20.
This paper describes an attempt to develop a statistical expert system (FILTEX) as an intelligent aid for time-series filter design. To this end, the knowledge of the filter design strategy is represented in Prolog and coupled with the numerical routines of a general purpose signal processing package. This knowledge-based system is conceived as a set of independent knowledge sources integrated into a system by a blackboard mechanism which embodies overall control of the filter design process. Modularity and flexibility of knowledge representation in such a framework preserve the usability of the evolving system during its development from the original numerical package to an expert system for filter design. This approach seems to be more flexible than the use of shells and less time-consuming than building from scratch. A novel method for incorporating classical statistical information into an uncertainty management mechanism is presented. Experimental results confirm the feasibility of the approach and set directions for further research.
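To illustrate the blackboard idea in general terms (the knowledge sources and rules below are invented; FILTEX's actual Prolog sources differ), independent components can cooperate through a shared state under a simple control loop:

```python
# A toy blackboard: independent knowledge sources read and write
# a shared state until the "design" is complete.
def propose_filter_type(bb):
    if "spec" in bb and "type" not in bb:
        bb["type"] = "low-pass" if bb["spec"]["cutoff"] < 0.5 else "high-pass"

def choose_order(bb):
    if "type" in bb and "order" not in bb:
        bb["order"] = max(2, int(10 * bb["spec"]["ripple"]))

blackboard = {"spec": {"cutoff": 0.2, "ripple": 0.4}}
sources = [propose_filter_type, choose_order]
while "order" not in blackboard:   # control mechanism
    for ks in sources:
        ks(blackboard)             # each source fires when it can
print(blackboard)
```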
