Similar Documents
20 similar documents found.
1.
Correlation and regression analysis are often used to infer causal relationships in dynamic systems, even though they are computed on cross-sectional, static data. In education, these analytic techniques have been used to support assertions that school-controlled variables make little contribution to student learning. Critics of these assertions point to the low quality of the data, but it may be that the techniques themselves are inappropriate for developing inferences of causality. This study simulated four possible models of dynamic relationships between family and school inputs and achievement outcomes. The models were run for five periods, and the data generated were submitted to correlation and regression analysis. Both unique-variance and regression-coefficient indicators failed to reliably describe the causal relationships built into the models. Conclusion: complex systems resist simplistic analyses.
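As an illustration of the simulation design summarized above (the dynamic equations, coefficients, and feedback structure below are invented for the sketch, not taken from the paper's four models), one can build a dynamic system with a known school effect and see what a cross-sectional regression recovers:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
family = rng.normal(0, 1, n)                 # stable family input
school = rng.normal(0, 1, n)                 # initial school input
achievement = np.zeros(n)

for period in range(5):                      # five periods, as in the study
    # feedback: school resources respond to prior achievement (assumed)
    school = 0.5 * school + 0.4 * achievement + rng.normal(0, 0.5, n)
    achievement = 0.9 * achievement + 0.3 * school + 0.5 * family + rng.normal(0, 0.5, n)

# cross-sectional OLS on the final period only
X = np.column_stack([np.ones(n), family, school])
beta, *_ = np.linalg.lstsq(X, achievement, rcond=None)
print("OLS coefficients (intercept, family, school):", beta.round(2))
# once feedback is present, the estimated school coefficient diverges from
# the structural 0.3 built into the model, illustrating the inference problem
```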

2.
This paper provides a comparative study of machine learning techniques for two-group discrimination. Simulated data are used to examine how the different learning techniques perform with respect to certain data distribution characteristics. Both linear and nonlinear discrimination methods are considered. The data have previously been used in comparative evaluations of a number of techniques, which helps relate our findings to a broader range of discrimination methods.
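A minimal sketch of this kind of comparison, assuming invented distribution parameters rather than the paper's actual simulated data:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 300
# two groups with unequal covariance structures (assumed parameters)
g0 = rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 1.0]], n)
g1 = rng.multivariate_normal([1, 1], [[2.0, -0.5], [-0.5, 0.5]], n)
X = np.vstack([g0, g1])
y = np.repeat([0, 1], n)

for name, clf in [("linear (LDA)", LinearDiscriminantAnalysis()),
                  ("nonlinear (5-NN)", KNeighborsClassifier(5))]:
    acc = cross_val_score(clf, X, y, cv=10).mean()
    print(f"{name}: cross-validated accuracy {acc:.3f}")
```

When the groups share a common covariance the linear rule tends to win; as the covariance structures diverge, the nonlinear method gains ground, which is the kind of distribution-dependence the study examines.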

3.
Charles W. Gross, Omega, 1978, 6(6): 531-539
This paper reports the results of a study applying management science techniques to jury research. A public opinion survey was conducted and entered into evidence in a motion for change of venue, and the survey data were used to develop a jury selection model employed by the defense in a second-degree murder trial. The case had involved considerable pre-trial publicity. From a defense perspective, the defendant received the least undesirable guilty verdict. The management science techniques employed were statistical sampling, the linear discriminant function, and regression analysis. Suggestions for other applications in jury research are offered.

4.
The purpose of this research is to show the usefulness of three relatively simple nonlinear classification techniques for policy-capturing research, where linear models have typically been used. This study uses 480 cases to assess the decision-making process used by 24 experienced national bank examiners in classifying commercial loans as acceptable or questionable. The results from multiple discriminant analysis (a linear technique) are compared to those of chi-squared automatic interaction detector (CHAID) analysis (a search technique), log-linear analysis, and logit analysis. Results show that while the four techniques are equally accurate in predicting loan classification, CHAID and log-linear analysis enable the researcher to analyze the decision-making structure and examine the “human” variable within the decision-making process. Consequently, if the sole purpose of research is to predict the decision maker's decisions, then any of the four techniques is equally useful. If, however, the purpose is to analyze the decision-making process as well as to predict decisions, then CHAID or log-linear techniques are more useful than linear model techniques.
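The flavor of the comparison can be sketched as follows. CHAID itself is not available in scikit-learn, so a CART decision tree stands in for the search-based technique, and the loan features and their effects are invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 480  # same sample size as the study
X = np.column_stack([
    rng.normal(1.5, 0.5, n),    # debt service coverage (assumed feature)
    rng.integers(0, 2, n),      # collateral present? (assumed feature)
    rng.normal(0.4, 0.15, n),   # leverage ratio (assumed feature)
])
logit_true = -2 + 1.5 * X[:, 0] + 1.0 * X[:, 1] - 3.0 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logit_true))).astype(int)  # acceptable = 1

for name, clf in [("logit", LogisticRegression()),
                  ("tree (CHAID stand-in)", DecisionTreeClassifier(max_depth=3))]:
    print(name, cross_val_score(clf, X, y, cv=10).mean().round(3))
# accuracies are typically close, echoing the finding that the techniques
# predict about equally well; the tree additionally exposes the split
# structure of the decision process for inspection
```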

5.
A number of recent studies have compared the performance of neural networks (NNs) to a variety of statistical techniques for the classification problem in discriminant analysis. The empirical results of these comparative studies indicate that while NNs often outperform the more traditional statistical approaches to classification, this is not always the case. Decision makers interested in solving classification problems are thus left in a quandary as to which tool to use on a particular data set. We present a new approach to solving classification problems by combining the predictions of a well-known statistical tool with those of an NN to create composite predictions that are more accurate than either of the individual techniques used in isolation.
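One simple way to form such composite predictions (the paper's own combination scheme may differ) is to average the predicted class probabilities of the two models:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=600, n_features=8, random_state=3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=3)

lr = LogisticRegression().fit(X_tr, y_tr)                       # statistical tool
nn = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                   random_state=3).fit(X_tr, y_tr)              # neural network

# composite: average the two predicted probabilities, then classify
p_combo = (lr.predict_proba(X_te)[:, 1] + nn.predict_proba(X_te)[:, 1]) / 2
for name, pred in [("logit", lr.predict(X_te)),
                   ("NN", nn.predict(X_te)),
                   ("composite", (p_combo > 0.5).astype(int))]:
    print(name, accuracy_score(y_te, pred).round(3))
```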

6.
Asymmetric loss functions often arise when regression estimates are used in management decision making. However, regression analysis has traditionally used a symmetric loss function in which the cost of underestimating equals the cost of overestimating. If the loss function is linear and the degree of asymmetry can be determined, the asymmetric regression method presented in this paper can be employed to find appropriate regression estimates. Asymmetric regression analysis requires no unusual data inputs other than an estimate of the cost asymmetry and can be performed efficiently using standard linear programming techniques.
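The linear-programming formulation can be sketched directly. Assuming, for illustration, that underestimating costs three times as much as overestimating, minimize sum(3*u_i + v_i) subject to y_i = x_i'b + u_i - v_i with u, v >= 0 (the data below are synthetic):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
n, k = 50, 2
X = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])
y = 2 + 1.5 * X[:, 1] + rng.normal(0, 2, n)

c_under, c_over = 3.0, 1.0                       # assumed cost asymmetry
# decision variables: [b (k, free), u (n, >=0), v (n, >=0)]
c = np.concatenate([np.zeros(k), c_under * np.ones(n), c_over * np.ones(n)])
A_eq = np.hstack([X, np.eye(n), -np.eye(n)])     # X b + u - v = y
bounds = [(None, None)] * k + [(0, None)] * (2 * n)
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
print("asymmetric fit (intercept, slope):", res.x[:k].round(3))
# with c_under > c_over the fitted line sits above the symmetric-loss line,
# hedging against the costlier underestimates
```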

7.
The basic concepts and application of spectral analysis are explained. Stationary time series and autocorrelation are first defined, and autocorrelation is related to the familiar concepts of variance and covariance. The use of autocorrelation analysis is explained for estimating the interdependent relationship of a time series over discrete time lags. To measure the behavior of a time series using autocorrelation, it would be necessary to examine a very large number of autocorrelation lags. Alternatively, the technique of Fourier analysis can be used to transform the autocorrelation function of the time series into a continuous function, termed a spectrum. The spectrum has a one-to-one correspondence with the autocorrelation function of the time series and has the advantage of representing all possible autocorrelations over the discrete time lags. The spectrum can then be examined as a measure of the behavior of the time series. Spectral analysis also indicates how reliable analyses of autocorrelated variables are when familiar statistical techniques such as sample means and variances are used. The application of spectral analysis to management science problems is illustrated in three general areas: (1) inventory demand, (2) transportation simulation, and (3) stock market price behavior. Spectral analysis was used to detect cycles and trends in the data. Analyses focused on the spectrum, which provides a measure of the relative contribution of cycles in a band of frequencies to the total variance of the data.
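A small sketch of the core computation, estimating the spectrum of a synthetic monthly series with an annual cycle (the cited applications used real inventory, transportation, and stock-price data):

```python
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(5)
t = np.arange(240)                               # 20 years of monthly data
series = 10 + 3 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, 240)

freqs, power = periodogram(series, fs=1.0, detrend="linear")
peak = freqs[1:][np.argmax(power[1:])]           # skip the zero frequency
print(f"dominant frequency: {peak:.4f} cycles/month (~{1 / peak:.0f}-month cycle)")
# the spectrum concentrates variance near 1/12 cycles per month,
# the annual cycle built into the series
```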

8.
陈宇新, 张喆, 杨涵方. 《管理科学》, 2018, 21(11): 61-75
Based on a database of UTD-24 and FT-45 publications drawing on Chinese data sources from 2009 to 2015, this study analyzes the current state and trends of international management research based on Chinese data from the perspectives of research fields, authors, and keywords. Building on data mining and visualization techniques, it further proposes a set of mining methods and presentation patterns for describing research trends, and analyzes the degree of attention, focal points, and research patterns of the business management field with respect to Chinese data sources, in order to better understand the influence of Chinese data and the research landscape in recent international management scholarship. At the field level, the paper analyzes attention to Chinese data along the dimensions of discipline, journal, and time. At the author level, it identifies where researchers who draw on Chinese data sources come from, the different research perspectives of researchers of different origins, and their collaboration preferences. At the keyword level, it aims to uncover hot topics in research based on Chinese data and to track how research themes evolve over time. The paper is also interested in the internal associations among keywords: through association analysis of keywords with regions and research fields, it explores the topical preferences of each field or region, and through analysis of the internal associations among keywords it uncovers potential topics for collaborative research.
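A toy sketch of the keyword co-occurrence analysis described (real input would be the UTD-24/FT-45 bibliographic records; the papers and keywords below are hypothetical):

```python
from collections import Counter
from itertools import combinations

papers = [  # hypothetical keyword lists, one per paper
    ["guanxi", "supply chain", "trust"],
    ["e-commerce", "platform", "supply chain"],
    ["guanxi", "trust", "negotiation"],
]

# count how often each keyword pair appears together across papers
pair_counts = Counter()
for kws in papers:
    pair_counts.update(combinations(sorted(set(kws)), 2))

for pair, count in pair_counts.most_common(3):
    print(pair, count)
```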

9.
Many different techniques have been proposed for performing uncertainty and sensitivity analyses on computer models for complex processes. The objective of the present study is to investigate the applicability of three widely used techniques to three computer models having large uncertainties and varying degrees of complexity, in order to highlight some of the problem areas that must be addressed in actual applications. The following approaches to uncertainty and sensitivity analysis are considered: (1) response surface methodology based on input determined from a fractional factorial design; (2) Latin hypercube sampling with and without regression analysis; and (3) differential analysis. These techniques are investigated with respect to (1) ease of implementation, (2) flexibility, (3) estimation of the cumulative distribution function of the output, and (4) adaptability to different methods of sensitivity analysis. With respect to these criteria, the technique using Latin hypercube sampling and regression analysis had the best overall performance. The models used in the investigation are well documented, making it possible for researchers to compare other techniques with the results of this study.
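The best-performing approach, Latin hypercube sampling followed by regression, can be sketched on a stand-in test function (the study's computer models are far more complex):

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(6)
sampler = qmc.LatinHypercube(d=3, seed=6)
X = qmc.scale(sampler.random(n=200), l_bounds=[0, 0, 0], u_bounds=[1, 2, 5])

# stand-in model: input 1 matters most, input 3 barely at all (assumed)
y = 4 * X[:, 0] + X[:, 1] ** 2 + 0.1 * X[:, 2] + rng.normal(0, 0.1, 200)

# regression-based sensitivity: standardized coefficients rank input importance
Xs = (X - X.mean(0)) / X.std(0)
ys = (y - y.mean()) / y.std()
beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(200), Xs]), ys, rcond=None)
print("standardized regression coefficients:", beta[1:].round(2))
```

The empirical distribution of `y` over the Latin hypercube sample also serves directly as the estimate of the output's cumulative distribution function, one of the study's evaluation criteria.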

10.
Damage models for natural hazards are used for decision making on reducing and transferring risk. The damage estimates from these models depend on many variables and their complex, sometimes nonlinear, relationships with the damage. In recent years, data-driven modeling techniques have been used to capture those relationships. The available data to build such models are often limited, so in practice it is usually necessary to transfer models to a different context. In this article, we show that this implies the samples used to build the model are often not fully representative of the situation to which they will be applied, which leads to a “sample selection bias.” We enhance data-driven damage models by applying methods, not previously applied to damage modeling, to correct for this bias before the machine learning (ML) models are trained. We demonstrate this with case studies on flooding in Europe and typhoon wind damage in the Philippines. Two sample selection bias correction methods from the ML literature are applied, and one of these methods is also adapted to our problem. These three methods are combined with stochastic generation of synthetic damage data. We demonstrate that for both case studies the sample selection bias correction techniques reduce model errors; for the mean bias error this reduction can exceed 30%. The novel combination with stochastic data generation appears to enhance these techniques. This shows that sample selection bias correction methods are beneficial for damage model transfer.
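One common correction from this family, importance weighting via a domain classifier, can be sketched as follows; the two methods evaluated in the article may differ in detail, and all data below are synthetic:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(7)
X_src = rng.normal(0.0, 1.0, (500, 1))             # covariates where damage data exist
X_tgt = rng.normal(1.0, 1.2, (500, 1))             # covariates in the transfer region
y_src = 2 * X_src[:, 0] + rng.normal(0, 0.3, 500)  # assumed damage relationship

# domain classifier: p(target|x)/p(source|x) approximates the density ratio
dom = LogisticRegression().fit(np.vstack([X_src, X_tgt]), np.repeat([0, 1], 500))
p = dom.predict_proba(X_src)[:, 1]
weights = p / (1 - p)                              # importance weights for source samples

# train the damage model on source data, reweighted toward the target region
model = LinearRegression().fit(X_src, y_src, sample_weight=weights)
print("reweighted damage-model coefficient:", model.coef_.round(2))
```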

11.
Bayesian network methodology is used to model key linkages of the service-profit chain within the context of transportation service satisfaction. Bayesian networks offer some advantages for implementing managerially focused models over other statistical techniques designed primarily for evaluating theoretical models. These advantages are (1) providing a causal explanation using observable variables within a single multivariate model, (2) analysis of nonlinear relationships contained in ordinal measurements, (3) accommodation of branching patterns that occur in data collection, and (4) the ability to conduct probabilistic inference for prediction and diagnostics with an output metric that can be understood by managers and academics. Sample data from 1,101 recent transport service customers are utilized to select and validate a Bayesian network and conduct probabilistic inference.
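A toy sketch of the predictive and diagnostic inference such a network supports, on an invented three-node chain (quality → satisfaction → loyalty) with made-up probabilities; the paper's network is selected and validated from the 1,101 customer records:

```python
# all structure and probabilities below are invented for illustration
P_quality = {"high": 0.6, "low": 0.4}            # prior on service quality
P_sat = {"high": 0.9, "low": 0.4}                # P(satisfied | quality)
P_loyal = {True: 0.8, False: 0.2}                # P(loyal | satisfied)

# prediction: P(loyal | quality) by summing over the hidden satisfaction node
p_loyal_given = {q: P_sat[q] * P_loyal[True] + (1 - P_sat[q]) * P_loyal[False]
                 for q in P_quality}
print(f"P(loyal | high quality) = {p_loyal_given['high']:.2f}")

# diagnostics: P(high quality | loyal) via Bayes' rule
p_loyal = sum(P_quality[q] * p_loyal_given[q] for q in P_quality)
print(f"P(high quality | loyal) = "
      f"{P_quality['high'] * p_loyal_given['high'] / p_loyal:.2f}")
```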

12.
This article proposes a methodology for incorporating electrical component failure data into the human error assessment and reduction technique (HEART) for estimating human error probabilities (HEPs). The existing HEART method contains factors known as error-producing conditions (EPCs) that adjust a generic HEP to a more specific situation being assessed. The selection and proportioning of these EPCs are at the discretion of an assessor, and are therefore subject to the assessor's experience and potential bias. This dependence on expert opinion is prevalent in similar HEP assessment techniques used in numerous industrial areas. The proposed method incorporates factors based on observed trends in electrical component failures to produce a revised HEP that can trigger risk mitigation actions more effectively based on the presence of component categories or other hazardous conditions that have a history of failure due to human error. The data used for the additional factors are a result of an analysis of failures of electronic components experienced during system integration and testing at NASA Goddard Space Flight Center. The analysis includes the determination of root failure mechanisms and trend analysis. The major causes of these defects were attributed to electrostatic damage, electrical overstress, mechanical overstress, or thermal overstress. These factors representing user-induced defects are quantified and incorporated into specific hardware factors based on the system's electrical parts list. This proposed methodology is demonstrated with an example comparing the original HEART method and the proposed modified technique.
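The standard HEART calculation that the proposal extends can be sketched as follows; the generic task probability, EPC multipliers, assessed proportions, and the hardware factor below are illustrative values only:

```python
# generic task HEP, EPC multipliers, and assessed proportions are illustrative
generic_hep = 0.003
epcs = [                        # (max EPC multiplier, assessed proportion of affect)
    (17.0, 0.4),                # e.g., unfamiliarity with the situation
    (11.0, 0.2),                # e.g., shortage of time
]

hep = generic_hep
for multiplier, proportion in epcs:
    hep *= (multiplier - 1.0) * proportion + 1.0   # standard HEART weighting

hardware_factor = 1.5           # hypothetical factor from component-failure history
print(f"HEART HEP = {hep:.4f}; with hardware factor = {hep * hardware_factor:.4f}")
```

The proposed modification replaces part of the assessor's discretion with factors like `hardware_factor`, derived from the observed component-failure trends rather than expert judgment alone.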

13.
The purpose of this article is to present a comprehensive 25-year review of the incorporation of levels of analysis into conceptual and empirical leadership research published within Leadership Quarterly throughout its history. We assessed the population of Leadership Quarterly's research (790 research articles) on four key levels-of-analysis issues: (1) explicit statement of the focal level(s) of analysis; (2) appropriate measurement given the level of constructs; (3) use of a multi-level data analysis technique; and (4) alignment of theory and data. Prior reviews of levels-of-analysis incorporation into leadership research have been limited to major research domains. Results revealed that although only about one-third of conceptual and empirical articles explicitly state the focal level of analysis, appropriate levels-based measurement and alignment between theory and data are relatively strong areas of achievement for the articles within Leadership Quarterly. Multi-level data analysis techniques are used in less than one-fifth of all articles. Although there is room for improvement, there is evidence that Leadership Quarterly is a premier outlet for levels-based leadership research. Given the increasing complexity of organizational science with regard to groups, teams, and collectives, Leadership Quarterly has an opportunity to model for organizational research how to build and test complicated multi-level theories and models.

14.
Hybrid Processing of Stochastic and Subjective Uncertainty Data
Uncertainty analyses typically recognize separate stochastic and subjective sources of uncertainty, but do not systematically combine the two, although a large amount of data used in analyses is partly stochastic and partly subjective. We have developed methodology for mathematically combining stochastic and subjective sources of data uncertainty, based on new "hybrid number" approaches. The methodology can be utilized in conjunction with various traditional techniques, such as PRA (probabilistic risk assessment) and risk analysis decision support. Hybrid numbers have been previously examined as a potential method to represent combinations of stochastic and subjective information, but mathematical processing has been impeded by the requirements inherent in the structure of the numbers, e.g., there was no known way to multiply hybrids. In this paper, we will demonstrate methods for calculating with hybrid numbers that avoid the difficulties. By formulating a hybrid number as a probability distribution that is only fuzzily known, or alternatively as a random distribution of fuzzy numbers, methods are demonstrated for the full suite of arithmetic operations, permitting complex mathematical calculations. It will be shown how information about relative subjectivity (the ratio of subjective to stochastic knowledge about a particular datum) can be incorporated. Techniques are also developed for conveying uncertainty information visually, so that the stochastic and subjective components of the uncertainty, as well as the ratio of knowledge about the two, are readily apparent. The techniques demonstrated have the capability to process uncertainty information for independent, uncorrelated data, and for some types of dependent and correlated data. Example applications are suggested, illustrative problems are shown, and graphical results are given.
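A crude sketch of the "random distribution of fuzzy numbers" view: each hybrid is a stochastic draw carrying a fuzzy support, and multiplication combines Monte Carlo sampling of the stochastic part with interval arithmetic on the supports. All numbers are illustrative, and the paper's full machinery covers the complete suite of operations and membership levels:

```python
import numpy as np

rng = np.random.default_rng(8)
N = 10_000

def sample_hybrid(mean, sd, spread):
    """Stochastic center plus a symmetric fuzzy support [lo, hi] around each draw."""
    center = rng.normal(mean, sd, N)
    return center - spread, center + spread

a_lo, a_hi = sample_hybrid(5.0, 0.5, 0.2)   # hybrid A: mostly stochastic
b_lo, b_hi = sample_hybrid(2.0, 0.3, 0.1)   # hybrid B

# interval product per draw: min/max over the four endpoint products
prods = np.stack([a_lo * b_lo, a_lo * b_hi, a_hi * b_lo, a_hi * b_hi])
lo, hi = prods.min(axis=0), prods.max(axis=0)
print(f"mean fuzzy support of the product: [{lo.mean():.2f}, {hi.mean():.2f}]")
# the scatter of (lo, hi) across draws carries the stochastic part; the
# width hi - lo carries the subjective part, so their ratio is recoverable
```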

15.
This paper describes cognitive and behavior analytic approaches to the study of intrinsic and extrinsic reward effects and considers the implications of differences in these approaches for future study and application. A critique of the research design and data analysis techniques used in the Deci-type paradigm is presented. The paper also describes a behavioral model of intrinsic reinforcement for use by Organizational Behavior Managers, and concludes with a proposal for research in this area.

16.
The objective of this paper is to discover which of three modes of selecting parameters for four short-term forecasting techniques minimizes forecast errors. The study also examines whether the amount of historical data used to find parameters contributes to forecasting success. The results show the traditional one-ahead search routine works well in some, but not all, forecasting situations. Forecasting errors also appear to decline when more historical data are included in the parameter search.
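The traditional one-ahead search can be sketched for simple exponential smoothing: pick the smoothing constant that minimizes one-step-ahead squared error over the available history (the demand series here is synthetic):

```python
import numpy as np

rng = np.random.default_rng(9)
history = 100 + np.cumsum(rng.normal(0, 2, 120))    # synthetic demand history

def one_ahead_mse(series, alpha):
    level, errors = series[0], []
    for obs in series[1:]:
        errors.append(obs - level)                  # forecast = current level
        level += alpha * (obs - level)              # exponential smoothing update
    return np.mean(np.square(errors))

alphas = np.linspace(0.05, 0.95, 19)
best = min(alphas, key=lambda a: one_ahead_mse(history, a))
print(f"best alpha: {best:.2f}, one-ahead MSE: {one_ahead_mse(history, best):.2f}")
```

Rerunning the search on truncated slices of `history` is one way to probe the paper's second question, how the amount of historical data affects the parameters found.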

17.
When using data envelopment analysis (DEA) as a benchmarking technique for nursing homes, it is essential to include measures of the quality of care. We survey applications where quality has been incorporated into DEA models and consider the concerns that arise when the results show that quality measures have been effectively ignored. Three modeling techniques are identified that address these concerns. Each of these techniques requires some input from management as to the proper emphasis to be placed on the quality aspect of performance. We report the results of a case study in which we apply these techniques to a DEA model of nursing home performance. We examine in depth not only the resulting efficiency scores, but also the benchmark sets and the weights given to the input and output measures. We find that two of the techniques are effective in ensuring that DEA results discriminate between high- and low-quality performance.
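One of the modeling techniques has this flavor: in a multiplier-form DEA model, the quality output's weight is given a floor so that quality cannot be effectively ignored. The nursing-home data and the floor value below are invented:

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([[5.0, 3.0], [4.0, 6.0], [6.0, 4.0]])   # inputs: staff, beds (assumed)
Y = np.array([[8.0, 0.9], [7.0, 0.6], [9.0, 0.7]])   # outputs: resident-days, quality
k = 0                                                 # unit under evaluation

s, m = Y.shape[1], X.shape[1]
c = np.concatenate([-Y[k], np.zeros(m)])              # maximize u'y_k
A_ub = np.hstack([Y, -X])                             # u'y_j - v'x_j <= 0, all units j
A_eq = np.concatenate([np.zeros(s), X[k]])[None, :]   # normalization v'x_k = 1
bounds = ([(1e-6, None), (0.05, None)]                # assumed floor on the quality weight
          + [(1e-6, None)] * m)
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(len(X)), A_eq=A_eq, b_eq=[1.0],
              bounds=bounds)
print(f"efficiency of unit {k}: {-res.fun:.3f}")
```

Comparing scores with and without the floor reveals which units look efficient only when the quality measure is allowed to carry a near-zero weight.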

18.
Donald V. Mathusz, Omega, 1977, 5(5): 593-604
Cost-benefit analysis has a considerable literature in which information systems have been patently ignored. This reflects the considerable difficulties of applying the theory to information systems, and the state of the art remains much as Koopmans described it some 19 years ago (1957). A bar to further development appears to be the lack of an applicable value-of-information concept. This paper seeks to clarify the issues and provide a robust theoretical and data analysis framework that will cover most situations. The approach here is to separate explicitly the dimensions of cost from those of information benefit and examine the implications. The Null Information Benefit condition emerges as a special theoretical case, but potentially a most important one in applications. This case, together with the Pareto optimum, defines a large class of such problems that can be handled by the decision criteria and data analysis techniques tabulated and discussed here. The selection of input data techniques defines the limits of later project justification and may be crucial to the political viability of the project throughout its life. Finally, the relationship between general management and information systems management is discussed in terms of this situation.
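The value-of-information idea at its simplest can be made concrete with an expected-value-of-perfect-information calculation (an illustration of the concept, not the paper's framework; all payoffs and probabilities are invented):

```python
import numpy as np

payoff = np.array([[100, -50],    # action A under states s1, s2 (assumed)
                   [ 20,  30]])   # action B
p = np.array([0.4, 0.6])          # assumed state probabilities

best_without = max(payoff @ p)                 # commit to one action now
best_with = (payoff.max(axis=0) * p).sum()     # choose per revealed state
print(f"EVPI = {best_with - best_without:.1f}")
# a Null Information Benefit arises when the same action is best in every
# state, so EVPI = 0 and the information system yields no decision value
```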

19.
The unemployment rate and the help-wanted index are two frequently used indicators of the state of the labor market. Traditional regression techniques used in analyzing the relationship between these two indicators and their cyclical behavior relative to business fluctuations have yielded varying results. Spectral analysis is used in this article to examine the cyclical behavior of the labor market indicators with respect to each other and with respect to a measure of aggregate economic activity in the United States. Very strong relationships are found, lending support to the rationale of using such indicators as labor market proxies. This application of spectral analysis also provides an illustration of the potential fruitfulness of spectral analysis in examining the cyclical relationships between economic time series.
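The cross-series question, how strongly two indicators co-move at each frequency, can be sketched with a coherence estimate on synthetic series sharing a common cycle (the study used the actual unemployment rate and help-wanted index):

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(10)
t = np.arange(480)                                 # 40 years of monthly data
cycle = np.sin(2 * np.pi * t / 48)                 # common ~4-year cycle (assumed)
unemployment = cycle + rng.normal(0, 0.5, 480)
help_wanted = -cycle + rng.normal(0, 0.5, 480)     # countercyclical indicator

f, Cxy = coherence(unemployment, help_wanted, fs=1.0, nperseg=120)
print(f"peak coherence {Cxy.max():.2f} at {f[np.argmax(Cxy)]:.4f} cycles/month")
# coherence near 1 at the shared business-cycle frequency is the kind of
# "very strong relationship" the spectral approach detects
```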

20.
A genetic algorithm (GA) approach is developed for solving the P-model of chance-constrained data envelopment analysis (CCDEA) problems, which embodies the concept of “satisficing”. The problems considered include cases in which both inputs and outputs are stochastic, as well as cases in which only the outputs are stochastic. The standard solution technique has been to derive “deterministic equivalents”, which is difficult when all parameters are stochastic because no compact methods are available. In the proposed approach, the stochastic objective function and chance constraints are used directly within the genetic process, and the feasibility of the chance constraints is checked by stochastic simulation techniques. A case study of the Indian banking sector is presented to illustrate the approach.
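The core loop can be sketched on a toy problem with the same structure, maximizing expected value subject to a chance constraint checked by stochastic simulation rather than a deterministic equivalent (the actual CCDEA formulation replaces the knapsack-style objective and constraint used here):

```python
import numpy as np

rng = np.random.default_rng(11)
n_items, pop_size, gens = 10, 40, 60
value = rng.uniform(1, 10, n_items)                 # assumed deterministic values
w_mean, w_sd = rng.uniform(1, 5, n_items), 0.5      # stochastic weights (assumed)
capacity, alpha = 15.0, 0.9                         # P(weight <= capacity) >= 0.9

def feasible(x, sims=200):
    # chance constraint checked by stochastic simulation
    w = rng.normal(w_mean, w_sd, (sims, n_items))
    return np.mean(w @ x <= capacity) >= alpha

def fitness(x):
    return value @ x if feasible(x) else -1.0       # penalize infeasible candidates

pop = rng.integers(0, 2, (pop_size, n_items))       # binary chromosomes
for _ in range(gens):
    scores = np.array([fitness(x) for x in pop])
    parents = pop[np.argsort(scores)[-pop_size // 2:]]        # truncation selection
    cut = rng.integers(1, n_items, pop_size // 2)
    kids = np.array([np.concatenate([parents[i][:c], parents[-i - 1][c:]])
                     for i, c in enumerate(cut)])             # one-point crossover
    mut = rng.random(kids.shape) < 0.05
    kids = np.where(mut, 1 - kids, kids)                      # bit-flip mutation
    pop = np.vstack([parents, kids])

best = max(pop, key=fitness)
print("best solution:", best, "value:", round(float(value @ best), 2))
```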
