Similar Documents
 20 similar documents were retrieved.
1.
A vast collection of reusable mathematical and statistical software is now available for use by scientists and engineers in their modeling efforts. This software represents a significant source of mathematical expertise, created and maintained at considerable expense. Unfortunately, the collection is so heterogeneous that it is a tedious and error-prone task simply to determine what software is available to solve a given problem. In mathematical problem solving environments of the future such questions will be fielded by expert software advisory systems. One way for such systems to systematically associate available software with the problems they solve is to use a problem classification system. In this paper we describe a detailed tree-structured problem-oriented classification system appropriate for such use.
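
The classification scheme itself is defined in the paper; purely as a hedged sketch of how a tree-structured, problem-oriented taxonomy could be represented and queried, the Python fragment below uses a nested dictionary with illustrative class labels and a hypothetical routine catalogue (none of these names come from the paper).

```python
# A minimal sketch of a tree-structured, problem-oriented classification.
# Class labels are illustrative only; the paper defines its own scheme.
taxonomy = {
    "Linear algebra": {
        "Linear systems": ["dense", "banded", "sparse"],
        "Eigenvalue problems": ["symmetric", "nonsymmetric"],
    },
    "Optimization": {
        "Unconstrained": ["smooth", "nonsmooth"],
        "Constrained": ["linear programming", "nonlinear programming"],
    },
}

# Hypothetical catalogue mapping a classification path to available routines.
catalogue = {
    ("Linear algebra", "Linear systems", "sparse"): ["SPARSKIT", "UMFPACK"],
    ("Optimization", "Constrained", "linear programming"): ["simplex_solver"],
}

def routines_for(path):
    """Return the routines filed under a classification path, if any."""
    return catalogue.get(tuple(path), [])

print(routines_for(["Linear algebra", "Linear systems", "sparse"]))
```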

2.
In this article, we extended the widely used Bland-Altman graphical technique for comparing two measurements in clinical studies to include an analytical approach using a linear mixed model. The proposed statistical inferences can be conducted easily with commercially available statistical software such as SAS. The linear mixed model approach was illustrated using a real example from a clinical nursing study of oxygen saturation measurements, in which functional oxygen saturation was compared with fractional oxyhemoglobin.
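
The article carries out the analysis in SAS; as a rough Python illustration of the same idea (not the authors' code), the sketch below fits a random-intercept linear mixed model to simulated paired differences with statsmodels and reads off the bias and limits of agreement. Subject counts, noise levels, and the assumed -0.5 bias are invented for the example.

```python
# Sketch: Bland-Altman-style agreement analysis via a linear mixed model.
# Simulated data stand in for the oxygen-saturation example; not the authors' SAS code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subj, n_rep = 30, 3
subj = np.repeat(np.arange(n_subj), n_rep)
true = 95 + rng.normal(0, 2, n_subj)[subj]            # subject-level "true" saturation
spo2 = true + rng.normal(0, 0.8, subj.size)           # method 1: functional saturation
fo2hb = true - 0.5 + rng.normal(0, 0.8, subj.size)    # method 2: fractional oxyhemoglobin (bias -0.5 assumed)

df = pd.DataFrame({"subject": subj, "diff": spo2 - fo2hb})

# Random intercept per subject accounts for repeated measurements within subject.
fit = smf.mixedlm("diff ~ 1", df, groups=df["subject"]).fit()
bias = fit.params["Intercept"]
total_sd = np.sqrt(fit.cov_re.iloc[0, 0] + fit.scale)  # between- plus within-subject variance
print(f"bias = {bias:.2f}, 95% limits of agreement: "
      f"{bias - 1.96 * total_sd:.2f} to {bias + 1.96 * total_sd:.2f}")
```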

3.
The World Wide Web (WWW) represents a powerful tool for furthering the development and practice of statistics. The GASP (Globally Accessible Statistical Procedures) WWW site has been set up as a primary listing of statistical procedures which can be used over the WWW. This article highlights several possible approaches for making a procedure WWW accessible. These approaches effectively solve many of the problems typically encountered when using a new statistical procedure. Applying the methods discussed, any statistical technique can be made available to anyone with a forms- or Java-capable WWW browser. Procedures can be delivered in a virtually platform-independent manner with only minimal requirements on a user's hardware or software.

4.
The bootstrap is a powerful non-parametric statistical technique for making probability-based inferences about a population parameter. Through a Monte-Carlo resampling simulation, bootstrapping empirically generates a statistic's entire distribution. From this simulated distribution, inferences can be made about a population parameter. Assumptions about normality are not required. In general, despite its power, bootstrapping has been used relatively infrequently in social science research, and this is particularly true for business research. This under-utilization is likely due to a combination of a general lack of understanding of the bootstrap technique and the difficulty with which it has traditionally been implemented. Researchers in the various fields of business should be familiar with this powerful statistical technique. The purpose of this paper is to explain how this technique works using Lotus 1-2-3, a software package with which business people are very familiar.
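
The paper implements the method in Lotus 1-2-3; purely to illustrate the resampling idea in a modern setting, here is a short Python sketch that bootstraps the sampling distribution of a mean and forms a percentile confidence interval (toy data, not from the paper).

```python
# Sketch of the bootstrap: resample with replacement, recompute the statistic,
# and use the empirical distribution of the replicates for inference.
import numpy as np

rng = np.random.default_rng(42)
sample = rng.lognormal(mean=3.0, sigma=0.5, size=50)   # skewed toy data (e.g. revenues)

B = 10_000
boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(B)
])

lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"sample mean = {sample.mean():.2f}; 95% percentile bootstrap CI = ({lo:.2f}, {hi:.2f})")
```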

5.
The role of statistics in quality and productivity improvement depends on certain philosophical issues that the author believes have been inadequately addressed. Three such issues are as follows: (1) what is the role of statistics in the process of investigation and discovery; (2) how can we extrapolate results from the particular to the general; and (3) how can we evaluate possible management changes so that they truly benefit an organization? Therefore, statistical methods appropriate to investigation and discovery are discussed as distinct from those appropriate to the testing of an already discovered solution. It is shown how the manner in which the tentative solution has been arrived at determines the assurance with which experimental conclusions can be extrapolated to the application in mind. Whether or not statistical methods and training can have any impact depends on the system of management. A vector representation which can help predict the consequences of changes in management strategy is discussed. This can help to realign policies so that members of an organization can better work together for the benefit of the organization.

6.
Box's paper helicopter has been used to teach experimental design for more than a decade. It is simple, inexpensive, and provides real data for an involved, multifactor experiment. Unfortunately it can also further an all-too-common practice that Professor Box himself has repeatedly cautioned against, namely ignoring the fundamental science while rushing to solve problems that may not be sufficiently understood. Often this slighting of the science so as to get on with the statistics is justified by referring to Box's oft-quoted maxim that “All models are wrong, however some are useful.” Nevertheless, what is equally true, to paraphrase both Professor Box and George Orwell, is that “All models are wrong, but some are more wrong than others.” To experiment effectively it is necessary to understand the relevant science so as to distinguish between what is usefully wrong, and what is dangerously wrong.

This article presents an improved analysis of Box's helicopter problem relying on statistical and engineering knowledge and shows that this leads to an enhanced paper helicopter, requiring fewer experimental trials and achieving superior performance. In fact, of the 20 experimental trials run for validation—10 each of the proposed aerodynamic design and the conventional full factorial optimum—the longest 10 flight times all belong to the aerodynamic optimum, while the shortest 10 all belong to the conventional full factorial optimum. I further discuss how ancillary engineering knowledge can be incorporated into thinking about—and teaching—experimental design.
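
As background to the "conventional full factorial" designs discussed above, the short sketch below enumerates a two-level full factorial design in coded units; the factor names are hypothetical stand-ins for helicopter dimensions and are not the factors or levels used in the article.

```python
# Sketch: enumerate a 2^3 full factorial design in coded units (-1, +1).
# Factor names are hypothetical; the article's own factors and levels differ.
from itertools import product

factors = ["wing_length", "body_width", "body_length"]
design = [dict(zip(factors, levels)) for levels in product([-1, +1], repeat=len(factors))]

for run, settings in enumerate(design, start=1):
    print(run, settings)   # 8 runs; each would be flown and timed
```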

7.
The growth of statistical software for PCs has been driven by decreasing hardware costs and dramatically enhanced PC performance. Whereas statistical computing was once largely the domain of mainframe solutions, a great number of new software packages for PCs have appeared in the last five years, and the producers of established mainframe software have consequently been forced to offer PC-based solutions as well. By limiting a market analysis to products offering a core set of well-known statistical methods, the immense number of available products is reduced to about fifty systems. We ordered evaluation copies of these systems to test their numerical quality, speed, and the performance of several procedures. Seventeen packages were made available for an extensive examination. This paper (1) discusses the problems of, and solutions to, obtaining a complete and correct data matrix that describes the entire market and (2) presents the results of a comparative market analysis.

8.
The rapid response to the requirements of customers and markets promotes the concurrent engineering (CE) technique in product and process design. Decisions about the process quality target, the SPC method, the sampling plan, and control chart parameter design can be made at the process quality planning stage, based on historical data and a process knowledge database. It is therefore a reasonable trend to introduce the concepts and achievements of process quality evaluation and process capability analysis, CE, and SPC techniques into process planning and tolerance design. A new systematic method for the concurrent design of process quality, statistical tolerance (ST), and control charts is presented, based on an NSFC research program. A set of standardized process quality indices (PQIs) for variables is introduced to measure and evaluate process yield, process centering, and quality loss. This index system, which has relatively strong compatibility and adaptability, is based on raisonné grading using the series of preferred numbers and arithmetical progressions. The expected process quality based on this system can be assured by a standardized interface between PQIs and SPC, that is, a quality-oriented statistical tolerance zone. A quality-oriented ST and SPC approach that quantitatively specifies what a desired process is, and how to assure it, realizes optimal control of a process toward a predetermined quality target.
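
The standardized PQI system is the paper's own contribution; as a generic, hedged illustration of the quantities such indices build on, the sketch below computes conventional Cp/Cpk capability indices and Shewhart X-bar control limits from simulated subgrouped data with assumed specification limits.

```python
# Sketch: conventional process capability indices and X-bar chart limits.
# Specification limits and data are assumed for illustration; the paper's PQI
# system standardizes and extends such measures.
import numpy as np

rng = np.random.default_rng(7)
subgroups = rng.normal(loc=10.02, scale=0.05, size=(25, 5))   # 25 subgroups of size 5
lsl, usl = 9.85, 10.15                                         # assumed spec limits

xbar = subgroups.mean(axis=1)
s = subgroups.std(axis=1, ddof=1)
sigma_hat = s.mean() / 0.9400          # c4 constant for subgroup size n = 5
grand_mean = xbar.mean()

cp = (usl - lsl) / (6 * sigma_hat)
cpk = min(usl - grand_mean, grand_mean - lsl) / (3 * sigma_hat)

ucl = grand_mean + 3 * sigma_hat / np.sqrt(5)   # X-bar chart control limits
lcl = grand_mean - 3 * sigma_hat / np.sqrt(5)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}, X-bar limits = ({lcl:.3f}, {ucl:.3f})")
```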

9.
A common population characteristic of interest in animal ecology studies pertains to the selection of resources. That is, given the resources available to animals, what do they ultimately choose to use? A variety of statistical approaches have been employed to examine this question, and each has advantages and disadvantages with respect to the form of available data and the properties of estimators given model assumptions. A wealth of high-resolution telemetry data are now being collected to study animal population movement and space use, and these data present both challenges and opportunities for statistical inference. We summarize traditional methods for resource selection and then describe several extensions to deal with measurement uncertainty and an explicit movement process that exists in studies involving high-resolution telemetry data. Our approach uses a correlated random walk movement model to obtain temporally varying use and availability distributions that are employed in a weighted distribution context to estimate selection coefficients. The temporally varying coefficients are then weighted by their contribution to selection and combined to provide inference at the population level. The result is an intuitive and accessible statistical procedure that uses readily available software and is computationally feasible for large datasets. These methods are demonstrated using data collected as part of a large-scale mountain lion monitoring study in Colorado, USA.
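
The temporally varying weighted-distribution estimator is developed in the paper; as a rough, self-contained sketch of its two ingredients, the fragment below simulates a correlated random walk and then fits an ordinary use-availability logistic regression for a single invented habitat covariate. All data are simulated, and this is the traditional resource-selection step rather than the authors' estimator.

```python
# Sketch: (1) simulate a correlated random walk; (2) crude use-availability
# resource selection fit via logistic regression. Simulated data only; the
# paper's weighted-distribution, temporally varying approach is more involved.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# 1. Correlated random walk: turning angles concentrated near zero.
n_steps = 500
steps = rng.exponential(scale=1.0, size=n_steps)
turns = rng.vonmises(mu=0.0, kappa=4.0, size=n_steps)      # correlated headings
heading = np.cumsum(turns)
path = np.cumsum(np.column_stack([steps * np.cos(heading),
                                  steps * np.sin(heading)]), axis=0)

# 2. Hypothetical habitat covariate evaluated at used and available locations.
def cover(xy):
    return np.sin(0.1 * xy[:, 0]) + np.cos(0.1 * xy[:, 1])

used = cover(path)
avail = cover(rng.uniform(path.min(), path.max(), size=(5 * n_steps, 2)))

y = np.concatenate([np.ones_like(used), np.zeros_like(avail)])
x = sm.add_constant(np.concatenate([used, avail]))
rsf = sm.GLM(y, x, family=sm.families.Binomial()).fit()
# A positive slope would indicate selection for higher cover; here the simulated
# walk ignores cover, so the estimate should be near zero.
print(rsf.params)
```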

10.
Various methods have been proposed for smoothing under a monotonicity constraint. We review the literature and implement an approach to monotone smoothing with B-splines for a generalized linear model response. The approach is expressed as a quadratic programming problem and is easily solved using the statistical software R. In a simulation study, we find that the approach performs better than competing approaches, with much faster computation time. The approach can also be used for smoothing under other shape constraints or mixed constraints. Supplementary materials, comprising the appendices and R code implementing the developed approach, are available online.
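
The authors' implementation is in R (see the online supplement); the sketch below is an independent Python approximation of the same idea for a Gaussian response: a B-spline least-squares fit with non-decreasing coefficients, a sufficient condition for monotonicity, solved as a constrained quadratic problem with SciPy.

```python
# Sketch: monotone smoothing with B-splines as a constrained least-squares
# (quadratic programming) problem. Independent of the paper's R code; shown
# for a Gaussian response with simulated data. Requires SciPy >= 1.8.
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import minimize, LinearConstraint

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 100))
y = np.log1p(5 * x) + rng.normal(scale=0.1, size=x.size)   # monotone signal + noise

k = 3                                                      # cubic splines
interior = np.linspace(0, 1, 10)[1:-1]
t = np.r_[np.zeros(k + 1), interior, np.ones(k + 1)]       # clamped knot vector
B = BSpline.design_matrix(x, t, k).toarray()               # n x m basis matrix
m = B.shape[1]

# Non-decreasing coefficients c[j+1] >= c[j] imply a non-decreasing spline.
D = np.diff(np.eye(m), axis=0)                             # D @ c gives successive differences
cons = LinearConstraint(D, lb=0.0, ub=np.inf)

def sse(c):
    r = y - B @ c
    return r @ r

c0 = np.linalg.lstsq(B, y, rcond=None)[0]                  # unconstrained start
res = minimize(sse, c0, method="trust-constr", constraints=[cons])
fit = BSpline(t, res.x, k)
print("fitted values are non-decreasing:", bool(np.all(np.diff(fit(x)) >= -1e-8)))
```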

11.
A statistical software package is a collaborative effort between a program's authors and users. When statistical analysis took place exclusively on mainframe computers, the entire statistical community was served by some three to six major packages, which helped to ensure that program errors would be quickly uncovered and corrected. The current trend toward performing statistical analysis on microcomputers has resulted in an explosion of software of varying quality, with more than 200 packages for the IBM PC alone. Since all of these programs are competing for the same base of knowledgeable users, the number of sophisticated users per package is dramatically less than for mainframe packages; the net result is that problems in any particular package are more likely to go unnoticed and uncorrected. For example, the most widely used shareware package contains major errors that should cause it to be rejected out of hand, and three best-selling packages analyze unbalanced two-factor experiments using an approximate technique originally developed for hand calculation. Several strategies are offered to help author and user reveal any problems that might be present in their software.
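
To make the unbalanced two-factor issue concrete, the hedged sketch below builds a small unbalanced layout and compares sequential (Type I) and partial (Type II) sums of squares from an exact least-squares fit; with unequal cell counts they no longer agree, which is the kind of distinction approximate hand-calculation methods blur. The data are simulated and no particular package is being reproduced.

```python
# Sketch: exact least-squares ANOVA for an unbalanced two-factor layout.
# With unequal cell counts, sequential (Type I) and partial (Type II) sums of
# squares differ, so the analysis method matters. Simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(5)
rows = []
cell_counts = {("a1", "b1"): 8, ("a1", "b2"): 3, ("a2", "b1"): 4, ("a2", "b2"): 9}
effects = {"a1": 0.0, "a2": 1.0, "b1": 0.0, "b2": 0.5}
for (a, b), n in cell_counts.items():
    y = effects[a] + effects[b] + rng.normal(0, 1, n)
    rows += [{"A": a, "B": b, "y": v} for v in y]
df = pd.DataFrame(rows)

model = smf.ols("y ~ C(A) * C(B)", data=df).fit()
print(anova_lm(model, typ=1))   # sequential SS: depends on term order
print(anova_lm(model, typ=2))   # partial SS: order-invariant
```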

12.
Most approaches to applying knowledge-based techniques to data analysis concentrate on context-independent statistical support. EXPLORA, however, is designed for subject-specific interpretation with regard to the contents of the data to be analyzed (i.e., content interpretation). Its knowledge base therefore also includes the objects and semantic relations of the real system that produces the data. In this paper we describe the functional model representing the process of content interpretation, summarize the software architecture of the system, and give some examples of its application by pilot users in survey analysis. EXPLORA addresses applications in which data are produced regularly and have to be analyzed in a routine way. The system systematically searches for statistical results (facts) to detect relations that could be overlooked by a human analyst. At the same time, EXPLORA helps overcome the large bulk of information that is usually still produced when presenting such data: a second knowledge process of content interpretation consists in discovering messages about the data by condensing the facts. Approaches to inductive generalization developed for machine learning are utilized to identify common attribute values of the objects to which the facts relate. At a later stage the system searches for interesting facts by applying redundancy rules and domain-dependent selection rules. EXPLORA formulates the messages in terms of the domain, groups and orders them, and even provides flexible navigation of the fact spaces.

13.
A general method is proposed by which nonnormally distributed data can be transformed to achieve approximate normality. The method uses an empirical nonlinear data-fitting approach and can be applied to a broad class of transformations including the Box-Cox, arcsine, generalized logit, and Weibull-type transformations. It is easy to implement using standard statistical software packages. Several examples are provided.
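
The paper's empirical nonlinear data-fitting approach covers a broad family of transformations; as one familiar member of that family, the sketch below applies a maximum-likelihood Box-Cox transformation with SciPy and checks skewness and normality before and after, using simulated skewed data rather than the paper's estimator or examples.

```python
# Sketch: Box-Cox transformation toward normality with SciPy.
# The paper's empirical nonlinear-fitting method also handles arcsine,
# generalized logit, and Weibull-type transformations; only Box-Cox is shown.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
x = rng.lognormal(mean=1.0, sigma=0.6, size=200)      # positive, right-skewed data

z, lam = stats.boxcox(x)                              # maximum-likelihood estimate of lambda
print(f"estimated lambda = {lam:.2f}")
print("skewness before:", round(stats.skew(x), 2), "after:", round(stats.skew(z), 2))
print("Shapiro-Wilk p-value after transformation:", round(stats.shapiro(z).pvalue, 3))
```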

14.
Following devolution, differences developed between UK countries in systems of measuring performance against a common target that ambulance services should respond to 75% of calls for potentially immediately life-threatening emergencies (category A calls) within 8 minutes. Only in England was this target integral to a ranking system of 'star ratings', which inflicted reputational damage on services that failed to hit targets, and only in England has this target been met. In other countries, the target has been missed by such large margins that services would have been publicly reported as failing had they been covered by the English system of star ratings. The paper argues that this case-study adds to evidence from comparisons of different systems of hospital performance measurement that, to have an effect, these systems need to be designed to inflict reputational damage on those that have performed poorly; and it explores implications of this hypothesis. The paper also asks questions about the adequacy of systems of performance measurement of ambulance services in UK countries.

15.
Concerning the task of integrating census and survey data from different sources as it is carried out by supranational statistical agencies, a formal metadata approach is investigated which supports data integration and table processing simultaneously. To this end, a metadata model is devised such that statistical query processing is accomplished by means of symbolic reasoning on machine-readable, operative metadata. As in databases, statistical queries are stated as formal expressions specifying declaratively what the intended output is; the operations necessary to retrieve appropriate available source data and to aggregate source data into the requested macrodata are derived mechanically. Using simple mathematics, this paper focuses particularly on the metadata model devised to harmonize semantically related data sources as well as the table model providing the principal data structure of the proposed system. Only an outline of the general design of a statistical information system based on the proposed metadata model is given and the state of development is summarized briefly.

16.
Response surface experimentation is an integral part of the development of a new process or product, but the relatively efficient statistical methodologies for such experimentation are underutilized by research and development scientists and engineers because of a lack of knowledge and/or understanding of these methodologies. To help increase their utilization, a simplified approach to one such statistical methodology, known as the determination of optimum conditions, has been developed which can be used by scientists and engineers with a minimum of statistical knowledge.
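
The simplified approach itself is developed in the paper; as generic background rather than the paper's procedure, the sketch below fits a second-order response surface to simulated two-factor data by least squares and solves for the stationary point, which is the usual "determination of optimum conditions" calculation.

```python
# Sketch: fit a second-order (quadratic) response surface in two factors and
# locate its stationary point. Simulated data; not the paper's simplified method.
import numpy as np

rng = np.random.default_rng(2)
x1 = rng.uniform(-1, 1, 30)
x2 = rng.uniform(-1, 1, 30)
y = 50 - 3 * (x1 - 0.3) ** 2 - 2 * (x2 + 0.2) ** 2 + rng.normal(0, 0.3, 30)

# Design matrix for y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2.
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
b0, b1, b2, b11, b22, b12 = np.linalg.lstsq(X, y, rcond=None)[0]

# Stationary point: solve  [2*b11  b12; b12  2*b22] x = -[b1; b2].
B = np.array([[2 * b11, b12], [b12, 2 * b22]])
x_star = np.linalg.solve(B, -np.array([b1, b2]))
print("estimated optimum factor settings:", np.round(x_star, 2))   # roughly (0.3, -0.2)
```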

17.
Two useful statistical methods for generating a latent variable are described and extended to incorporate polytomous data and additional covariates. Item response analysis is not well-known outside its area of application, mainly because the procedures to fit the models are computer intensive and not routinely available within general statistical software packages. The linear score technique is less computer intensive, straightforward to implement and has been proposed as a good approximation to item response analysis. Both methods have been implemented in the standard statistical software package GLIM 4.0, and are compared to determine their effectiveness.

18.
Wang Dewei, Jiang Chendi, Park Chanseok. Lifetime Data Analysis, 2019, 25(2): 341-360.

The load-sharing model has been studied since the early 1940s to account for the stochastic dependence of components in a parallel system. It assumes that, as components fail one by one, the total workload applied to the system is shared by the remaining components and thus affects their performance. Such dependent systems have been studied in many engineering applications, including fiber composites, manufacturing, power plants, computing workload analysis, and software and hardware reliability. Many statistical models have been proposed to analyze the impact of each redistribution of the workload, i.e., the change in the hazard rate of each remaining component. However, they do not consider how long a surviving component has worked prior to the redistribution. We call such load-sharing models memoryless. To remedy this potential limitation, we propose a general framework for load-sharing models that accounts for the work history. Through simulation studies, we show that an inappropriate use of the memoryless assumption can lead to inaccurate inference on the impact of redistribution. Further, a real-data example of plasma display devices is analyzed to illustrate our methods.

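The paper's contribution is a framework that retains the work history; to fix ideas about the memoryless baseline it generalizes, the sketch below simulates a simple equal load-sharing parallel system with exponential components whose failure rates scale up after each failure. The sharing rule and parameters are illustrative assumptions, not the model fitted in the paper.

```python
# Sketch: memoryless equal load-sharing system with exponential components.
# After each failure the surviving components share the load, so their common
# failure rate is scaled up. Illustrative rule only; not the paper's model.
import numpy as np

def system_lifetime(rng, n=4, base_rate=1.0):
    """Time until all n components have failed under equal load sharing."""
    t, remaining = 0.0, n
    while remaining > 0:
        rate_per_comp = base_rate * n / remaining     # total load shared by survivors
        # Minimum of `remaining` exponential lifetimes is itself exponential.
        t += rng.exponential(1.0 / (remaining * rate_per_comp))
        remaining -= 1
    return t

rng = np.random.default_rng(13)
lifetimes = np.array([system_lifetime(rng) for _ in range(10_000)])
print(f"mean system lifetime ~ {lifetimes.mean():.3f}")
```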

19.
A well-designed clinical trial requires an appropriate sample size with adequate statistical power to address the trial objectives. Statistical power is traditionally defined as the probability of rejecting the null hypothesis given a pre-specified true clinical treatment effect; it is thus a conditional probability, conditioned on a true effect that is actually unknown and, in practice, never a fixed value. We therefore discuss a newly proposed alternative to conventional statistical power: statistical assurance, defined as the unconditional probability of rejecting the null hypothesis. Assurance can be obtained as an expected power, where the expectation is taken over the prior probability distribution of the unknown treatment effect, which leads to the Bayesian paradigm. In this article, we outline the transition from conventional statistical power to the newly developed assurance and discuss the computation of assurance using a Monte Carlo simulation-based approach.
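
To make the power-versus-assurance distinction concrete, here is a hedged Monte Carlo sketch for a two-arm trial with known variance: conventional power fixes the treatment effect, while assurance averages the rejection probability over a prior on that effect. All numerical settings (effect size, prior, variance, sample size) are illustrative.

```python
# Sketch: conventional power vs. Bayesian assurance for a two-arm trial with
# known variance. All settings are illustrative assumptions.
import numpy as np
from scipy.stats import norm

n_per_arm, sigma, alpha = 64, 1.0, 0.05
z_crit = norm.ppf(1 - alpha / 2)
se = sigma * np.sqrt(2 / n_per_arm)

def power(delta):
    """Probability of rejecting H0 (two-sided z-test) for a fixed true effect."""
    return norm.sf(z_crit - delta / se) + norm.cdf(-z_crit - delta / se)

print(f"conventional power at delta = 0.5: {power(0.5):.3f}")

# Assurance: average the conditional power over a prior for the unknown effect.
rng = np.random.default_rng(21)
prior_draws = rng.normal(loc=0.5, scale=0.2, size=100_000)
print(f"assurance under a N(0.5, 0.2^2) prior: {power(prior_draws).mean():.3f}")
```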

20.
Most system identification approaches and statistical inference methods rely on analytic knowledge of the probability distribution function of the system output variables. For dynamic systems modelled by hidden Markov chains or stochastic nonlinear state-space models, these distributions, as well as those of the state variables themselves, can be unknown or intractable. In that situation, the usual particle Monte Carlo filters for system identification, as well as likelihood-based inference and model selection methods, have to rely, whenever possible, on hazardous approximations and are often at risk. This review shows how a recent nonparametric particle filtering approach can be used efficiently in that context, not only for consistent filtering of these systems but also to restore these statistical inference methods, allowing, for example, consistent particle estimation of Bayes factors or the generalisation of sequential tests for detecting changes in model parameters. Real-life applications of these particle approaches to a microbiological growth model are presented as illustrations.
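
The review concerns a nonparametric particle filter in which the observation density itself must be estimated; as background only, the sketch below implements the standard bootstrap particle filter for a toy nonlinear state-space model with a known Gaussian observation density, which is precisely the ingredient the nonparametric approach replaces.

```python
# Sketch: standard bootstrap particle filter for a toy nonlinear state-space
# model with known Gaussian observation noise. The nonparametric approach
# reviewed above replaces this known observation density with an estimate.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(8)
T, N = 50, 2000                       # time steps, particles
sig_x, sig_y = 1.0, 1.0

def f(x, t):
    # Classic benchmark transition: x/2 + 25x/(1+x^2) + 8 cos(1.2 t)
    return 0.5 * x + 25 * x / (1 + x**2) + 8 * np.cos(1.2 * t)

x_true = np.zeros(T)
y_obs = np.zeros(T)
for t in range(1, T):
    x_true[t] = f(x_true[t - 1], t) + rng.normal(0, sig_x)
    y_obs[t] = x_true[t] ** 2 / 20 + rng.normal(0, sig_y)

particles = rng.normal(0, 1, N)
estimates = np.zeros(T)
for t in range(1, T):
    particles = f(particles, t) + rng.normal(0, sig_x, N)          # propagate
    w = norm.pdf(y_obs[t], loc=particles**2 / 20, scale=sig_y)     # weight by known likelihood
    w /= w.sum()
    particles = rng.choice(particles, size=N, replace=True, p=w)   # resample
    estimates[t] = particles.mean()

print("filtering RMSE:", round(float(np.sqrt(np.mean((estimates - x_true) ** 2))), 2))
```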
