Similar Literature
20 similar documents found
1.
We look at a specific but pervasive problem in the use of secondary or published data in which the data are summarized in a histogram format, perhaps with additional mean or median information provided; two published sources yield histogram-type summaries involving the same variable, but the two sources do not group the values of the variable the same way; the researcher wishes to answer a question using information from both data streams; and the original, detailed data underlying the published summary, which could give a better answer to the question, are unavailable. We review relevant aspects of maximum-entropy (ME) estimation, and develop a heuristic for generating ME density estimates from data in histogram form when additional means and medians may be known. Application examples from several business and scientific areas illustrate the heuristic's use. Areas of application include business and social or market research, risk analysis, and individual risk profile analysis. Some instructional or classroom applications are possible as well.
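
As a rough sketch of this kind of heuristic (not the authors' procedure): with only bin masses known, the ME density is uniform within each bin, and a known mean adds a common exponential tilt within every bin. The bin boundaries, masses, and target mean below are invented for illustration.

```python
import numpy as np
from scipy.optimize import brentq

def me_density(edges, probs, mean=None, bracket=(-1.0, 1.0)):
    """Heuristic ME density from histogram bin masses (a sketch).
    With only bin masses, the ME density is uniform within each bin;
    a known mean adds a common exponential tilt exp(lam*x) per bin."""
    edges = np.asarray(edges, float)
    probs = np.asarray(probs, float) / np.sum(probs)
    lo, hi = edges[:-1], edges[1:]

    def tilted_mean(lam):
        if abs(lam) < 1e-12:                  # uniform-within-bin case
            return float(np.sum(probs * (lo + hi) / 2.0))
        za, zb = np.exp(lam * lo), np.exp(lam * hi)
        return float(np.sum(probs * ((hi * zb - lo * za) / (zb - za) - 1.0 / lam)))

    lam = 0.0 if mean is None else brentq(lambda l: tilted_mean(l) - mean, *bracket)

    def pdf(x):
        x = np.asarray(x, float)
        j = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, len(probs) - 1)
        if abs(lam) < 1e-12:
            return probs[j] / (hi[j] - lo[j])
        norm = (np.exp(lam * hi[j]) - np.exp(lam * lo[j])) / lam
        return probs[j] * np.exp(lam * x) / norm

    return pdf

# Published histogram: 50% of values in [0,10), 30% in [10,20), 20% in [20,50]
f = me_density([0, 10, 20, 50], [0.5, 0.3, 0.2], mean=16.0)
print(f([5.0, 15.0, 30.0]))
```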

2.
Standard errors of the coefficients of a logistic regression (a binary response model) based on the asymptotic formula are compared to those obtained from the bootstrap through Monte Carlo simulations. The computer-intensive bootstrap method, a nonparametric alternative to the asymptotic estimate, overestimates the true standard errors, while the asymptotic formula underestimates them. However, for small samples the bootstrap estimates are substantially closer to the true values than their counterparts derived from the asymptotic formula. The methodology is discussed using two illustrative data sets. The first example deals with a logistic model explaining the log-odds of passing the ERA amendment by the 1982 deadline as a function of the percent of women legislators and the percent vote for Reagan. In the second example, the probability that an ingot is ready to roll is modelled using heating time and soaking time as explanatory variables. The results agree with those obtained from the simulations. The value of the study for better decision making through accurate statistical inference is discussed.
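
The comparison is easy to reproduce in outline. The sketch below fits a logit to synthetic data standing in for the ingot example and contrasts asymptotic standard errors with case-resampling bootstrap estimates; the sample size, coefficients, and replication count are arbitrary.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic stand-in for the ingot data: P(ready) depends on two covariates
n = 60
X = sm.add_constant(rng.normal(size=(n, 2)))       # heating time, soaking time
beta = np.array([-0.5, 1.0, 0.8])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta)))

fit = sm.Logit(y, X).fit(disp=0)
print("asymptotic SE:", fit.bse)                    # from the information matrix

# Nonparametric bootstrap: resample cases, refit, take the SD of coefficients
B, coefs = 500, []
for _ in range(B):
    idx = rng.integers(0, n, size=n)
    try:
        coefs.append(sm.Logit(y[idx], X[idx]).fit(disp=0).params)
    except Exception:      # separation can occur in small resamples; skip them
        continue
print("bootstrap SE: ", np.asarray(coefs).std(axis=0))
```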

3.
This paper develops an explicit relationship between sample size, sampling error, and related costs for the application of multiple regression models in observational studies. Graphs and formulas for determining optimal sample sizes and related factors are provided to facilitate the application of the derived models. These graphs reveal that, in most cases, the imprecision of estimates and minimum total cost are relatively insensitive to increases in sample size beyond n=20. Because of the intrinsic variation of the regression model, even if larger samples are optimal, the relative change in the total cost function is small when the cost of imprecision is a quadratic function. A model-utility approach, however, may impose a lower bound on sample size that requires the sample size be larger than indicated by the estimation or cost-minimization approaches. Graphs are provided to illustrate lower-bound conditions on sample size. Optimal sample size in view of all considerations is obtained by the maximin criterion, the maximum of the minimum sample size for all approaches.
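
A stylized version of the cost trade-off (all constants hypothetical, not the paper's) shows why total cost flattens quickly: a linear sampling cost plus a quadratic imprecision cost proportional to sigma^2/n is convex in n with an interior minimum.

```python
import numpy as np

# Stylized cost trade-off (all constants hypothetical): linear sampling cost
# c1*n plus a quadratic cost of imprecision proportional to sigma^2 / n.
c1, k, sigma2 = 5.0, 500.0, 4.0

def imprecision(n):
    return k * sigma2 / n            # expected quadratic loss from estimation error

def total_cost(n):
    return c1 * n + imprecision(n)

n_star = np.sqrt(k * sigma2 / c1)    # first-order condition of the convex total cost
print(f"cost-minimizing n = {n_star:.0f}")
for n in (10, 20, 40, 80):
    print(f"n = {n:3d}: imprecision {imprecision(n):6.1f}, total {total_cost(n):6.1f}")
# Past n = 20 the imprecision term shrinks slowly, so little is gained by
# sampling more unless a model-utility bound forces a larger n.
```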

4.
Constrained utility maximization underlies much consumer behavior in economics. Opportunities for solving important problems are ever present. However, most potential applications remain potential because existing software packages are not able to estimate the systems of equations necessary to identify the utility function. At least three features often conspire to make these problems intractable: the size of the system of equations that must be estimated, the lack of any theory for imposing zero restrictions on many of the parameters of the nonadditive utility functions, and the necessity to ensure negative definiteness consistent with the axioms of consumer behavior. This paper develops an approach and illustrates the stepwise least squares estimator of a group of equations by application to the demand for food items.

5.
A decision regarding development and introduction of a potential new product depends, in part, on the intensity of competition anticipated in the marketplace. In the case of a technology-based product such as a personal computer (PC), the number of competing products may be very dynamic and consequently uncertain. We address this problem by modeling growth in the number of new PCs as a stochastic counting process, incorporating product entries and exits. We demonstrate how to use the resulting model to forecast competition five years in advance.
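
A minimal birth-death sketch of such a counting process, with invented entry and exit rates rather than the paper's fitted model, illustrates the five-year forecasting idea:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical rates: new PC models enter the market at rate lam per year;
# each model currently on the market exits at rate mu per year.
lam, mu, horizon, n0 = 40.0, 0.25, 5.0, 100

def simulate(n0):
    t, n = 0.0, n0
    while True:
        rate = lam + mu * n                 # total event rate
        t += rng.exponential(1.0 / rate)
        if t > horizon:
            return n
        n += 1 if rng.random() < lam / rate else -1

draws = np.array([simulate(n0) for _ in range(2000)])
print(f"5-year forecast: mean {draws.mean():.0f}, 90% interval "
      f"({np.percentile(draws, 5):.0f}, {np.percentile(draws, 95):.0f})")
```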

6.
This article provides decision makers with a method of determining the variability and acceptability of a major capital investment. The model used here differs from previous models in that it does not use simulation, nor does it require a normal distribution for the cash flow component. Further, it places no restrictions on dependence among cash flows. An example of the technique is included.
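
One distribution-free way to get results of this flavor (a sketch, not the article's model): if the cash-flow vector has a known mean and covariance matrix, the mean and variance of NPV follow directly, with no simulation, no normality, and arbitrary dependence, and Chebyshev's inequality then bounds tail probabilities. All figures below are hypothetical.

```python
import numpy as np

# Mean-variance of NPV without simulation or normality: if the cash-flow vector
# has mean m and covariance S (dependence allowed), and d holds the discount
# factors, then E[NPV] = d @ m and Var[NPV] = d @ S @ d.
r = 0.10
m = np.array([-1000.0, 300.0, 400.0, 500.0, 400.0])   # expected cash flows
sd = np.array([0.0, 60.0, 80.0, 100.0, 80.0])
rho = 0.5                                             # common correlation, illustrative
S = rho * np.outer(sd, sd)
np.fill_diagonal(S, sd ** 2)
d = (1 + r) ** -np.arange(len(m))

e_npv = d @ m
sd_npv = np.sqrt(d @ S @ d)
print(f"E[NPV] = {e_npv:.0f}, SD[NPV] = {sd_npv:.0f}")
# Chebyshev gives a distribution-free bound: P(|NPV - E| >= 3*SD) <= 1/9
```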

7.
Machine learning methods are currently the object of considerable study by the artificial intelligence community. Research on machine learning carries implications for decision making in that it seeks computational methods that mimic input-output behaviors found in classes of decision-making examples. At the same time, research in statistics and econometrics has resulted in the development of qualitative-response models that can be applied to the same kinds of problems addressed by machine-learning models—particularly those that involve a classification decision. This paper presents the theoretical structure of a generalized qualitative-response model and compares its performance to two seminal machine-learning models in two problem domains associated with audit decision making. The results suggest that the generalized qualitative-response model may be a useful alternative for certain problem domains.
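
As a stand-in illustration of the comparison (not the paper's generalized model or its audit data), the sketch below pits an ordinary logit, one member of the qualitative-response family, against a tree learner on a synthetic classification task:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Synthetic audit-style classification task
X, y = make_classification(n_samples=400, n_features=8, random_state=0)

for name, clf in [("logit", LogisticRegression(max_iter=1000)),
                  ("tree ", DecisionTreeClassifier(max_depth=4, random_state=0))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()   # 5-fold accuracy
    print(name, round(acc, 3))
```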

8.
Industrial robots are increasingly used by many manufacturing firms. The number of robot manufacturers has also increased with many of these firms now offering a wide range of models. A potential user is thus faced with many options in both performance and cost. This paper proposes a decision model for the robot selection problem. The proposed model uses robust regression to identify, based on manufacturers' specifications, the robots that are the better performers for a given cost. Robust regression is used because it identifies and is resistant to the effects of outlying observations, key components in the proposed model. The robots selected by the model become candidates for testing to verify manufacturers' specifications. The model is tested on a real data set and an example is presented.
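
A small sketch of the screening idea, on invented data rather than the paper's: regress a performance score on cost with a Huber M-estimator and flag robots whose residuals lie well above the fitted line.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Hypothetical robot data: robots far ABOVE the robust fit outperform for their cost
cost = rng.uniform(20, 150, size=30)              # $000s, from spec sheets
perf = 0.6 * cost + rng.normal(0, 6, size=30)
perf[[3, 17]] += 35                               # two standout performers

X = sm.add_constant(cost)
fit = sm.RLM(perf, X, M=sm.robust.norms.HuberT()).fit()   # robust (Huber) regression
resid = perf - fit.predict(X)
candidates = np.where(resid > 2 * resid.std())[0]
print("candidate robots for testing:", candidates)
```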

9.
This paper presents a solution to an important econometric problem, namely the root n consistent estimation of nonlinear models with measurement errors in the explanatory variables, when one repeated observation of each mismeasured regressor is available. While a root n consistent estimator has been derived for polynomial specifications (see Hausman, Ichimura, Newey, and Powell (1991)), such an estimator for general nonlinear specifications has so far not been available. Using the additional information provided by the repeated observation, the suggested estimator separates the measurement error from the “true” value of the regressors thanks to a useful property of the Fourier transform: The Fourier transform converts the integral equations that relate the distribution of the unobserved “true” variables to the observed variables measured with error into algebraic equations. The solution to these equations yields enough information to identify arbitrary moments of the “true,” unobserved variables. The value of these moments can then be used to construct any estimator that can be written in terms of moments, including traditional linear and nonlinear least squares estimators, or general extremum estimators. The proposed estimator is shown to admit a representation in terms of an influence function, thus establishing its root n consistency and asymptotic normality. Monte Carlo evidence and an application to Engel curve estimation illustrate the usefulness of this new approach.
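
The lowest-order case of the identification argument fits in a few lines. With two classical repeated measurements and independent errors, the mean and variance of the unobserved regressor are already identified from simple moments; the paper's Fourier machinery extends this to arbitrary moments. The data below are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# With x1 = x* + e1 and x2 = x* + e2 (classical, independent errors):
#   E[x*]  = E[(x1 + x2) / 2]
#   Var(x*) = Cov(x1, x2)          since the errors are mutually independent
n = 100_000
x_true = rng.gamma(2.0, 1.5, size=n)               # unobserved regressor
x1 = x_true + rng.normal(0, 1.0, size=n)
x2 = x_true + rng.normal(0, 1.0, size=n)

print("true mean/var:", x_true.mean().round(3), x_true.var().round(3))
print("identified:   ", ((x1 + x2) / 2).mean().round(3),
      np.cov(x1, x2)[0, 1].round(3))
```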

10.
A preference-order recursion algorithm for obtaining a relevant subset of pure, admissible (non-dominated, efficient) decision functions which converges towards an optimal solution in statistical decision problems is proposed. The procedure permits a decision maker to interactively express strong binary preferences for partial decision functions at each stage of the recursion, from which an imprecise probability and/or utility function is imputed and used as one of several pruning mechanisms to obtain a reduced relevant subset of admissible decision functions or to converge on an optimal one. The computational and measurement burden is thereby mitigated significantly, for example, by not requiring explicit or full probability and utility information from the decision maker. The algorithm is applicable to both linear and nonlinear utility functions. The results of behavioral and computational experimentation show that the approach is viable, efficient, and robust.
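
The admissibility screen at the core of such procedures is easy to sketch: treat each decision function as a vector of utilities across states of nature and keep only the non-dominated rows. The utility matrix below is invented.

```python
import numpy as np

# Each row: a decision function's utility under each state of nature.
# Admissible = not weakly dominated (with at least one strict inequality).
U = np.array([[3.0, 1.0, 4.0],
              [2.0, 1.0, 3.0],    # dominated by row 0
              [1.0, 5.0, 2.0],
              [1.0, 4.0, 2.0]])   # dominated by row 2

def admissible(U):
    keep = []
    for i, u in enumerate(U):
        dominated = any((v >= u).all() and (v > u).any()
                        for j, v in enumerate(U) if j != i)
        if not dominated:
            keep.append(i)
    return keep

print("admissible decision functions:", admissible(U))   # -> [0, 2]
```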

11.
A number of recent studies have compared the performance of neural networks (NNs) to a variety of statistical techniques for the classification problem in discriminant analysis. The empirical results of these comparative studies indicate that while NNs often outperform the more traditional statistical approaches to classification, this is not always the case. Thus, decision makers interested in solving classification problems are left in a quandary as to what tool to use on a particular data set. We present a new approach to solving classification problems by combining the predictions of a well-known statistical tool with those of an NN to create composite predictions that are more accurate than either of the individual techniques used in isolation.
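
One simple way to form such composites (a sketch under assumed models, not necessarily the authors' combination rule) is to average the predicted class probabilities of a logit and a small neural network, then threshold:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Composite classifier sketch on synthetic data
X, y = make_classification(n_samples=600, n_features=10, random_state=1)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=1)

logit = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
nn = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=1).fit(Xtr, ytr)

# Average the two predicted probabilities, then apply the 0.5 cutoff
p = (logit.predict_proba(Xte)[:, 1] + nn.predict_proba(Xte)[:, 1]) / 2
for name, yhat in [("logit", logit.predict(Xte)), ("nn", nn.predict(Xte)),
                   ("composite", (p > 0.5).astype(int))]:
    print(name, round(accuracy_score(yte, yhat), 3))
```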

12.
A methodology for determining a von Neumann-Morgenstern utility function is outlined based on the axioms crucial to such a function. Reconciliation of inconsistent judgments is facilitated using the theory of reciprocal matrices. Numerical measures of the collective divergence of a set of judgments from perfect consistency or coherency are provided.
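
A minimal sketch of the reciprocal-matrix machinery (judgment values invented): the principal eigenvector imputes a utility scale, and (lam_max - n)/(n - 1) measures divergence from perfect consistency, with 0 meaning fully coherent judgments.

```python
import numpy as np

# Pairwise judgments a_ij ~ u_i / u_j, so a_ji = 1 / a_ij (a reciprocal matrix)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
u = np.abs(vecs[:, k].real)           # principal eigenvector = imputed utilities
u /= u.sum()
ci = (vals[k].real - len(A)) / (len(A) - 1)   # consistency index, 0 if coherent
print("imputed utilities:", u.round(3), " consistency index:", round(ci, 4))
```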

13.
A substantial body of empirical accounting, finance, management, and marketing research utilizes single equation models with discrete dependent variables. Generally, the interpretation of the coefficients of the exogenous variables is limited to the sign and relative magnitude. This paper presents three methods of interpreting the coefficients in these models. The first method interprets the coefficients as marginal probabilities and the second method interprets the coefficients as elasticities of probability. The third method utilizes sensitivity analysis and examines the effect of hypothetical changes in exogenous variables on the probability of choice. This paper applies these methods to a published research study.
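
For a logit specification the first two interpretations reduce to closed-form expressions, sketched below with hypothetical coefficients and a representative observation (not the published study's numbers).

```python
import numpy as np

# For a logit model P = 1 / (1 + exp(-x @ b)):
#   marginal probability dP/dx_k          = b_k * P * (1 - P)
#   elasticity (dP/P) / (dx_k/x_k)        = b_k * x_k * (1 - P)
b = np.array([-1.0, 0.8, -0.3])          # hypothetical fitted coefficients
x = np.array([1.0, 2.5, 4.0])            # constant term plus two regressors

p = 1 / (1 + np.exp(-x @ b))
marginal = b * p * (1 - p)
elasticity = b * x * (1 - p)
print("P =", round(p, 3))
print("marginal effects:", marginal.round(3))
print("elasticities:   ", elasticity.round(3))
```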

14.
Janssen and Daniel analyzed the choice between a one- or a two-point conversion for a particular game situation in college football. Their decision criterion was maximum expected utility based on a von Neumann-Morgenstern utility function defined over the game's outcomes. An alternative approach based on a stochastic dominance criterion is presented that does not rely on knowledge of the relative importance of tying vs. winning; rather, it relies on a notion of consistency in the sequential problem.
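
A toy version of the underlying trade-off (all probabilities and utilities hypothetical, not Janssen and Daniel's numbers): with win = 1, loss = 0, and tie utility u, the kick's expected utility is p1*u against q for the two-point try, so the expected-utility choice hinges on u, exactly the knowledge the dominance approach avoids requiring.

```python
# Hypothetical late-game numbers: a kick ties with probability p1 (worth
# utility u); a two-point try wins outright with probability q, else loses.
p1, q = 0.96, 0.40
for u in (0.30, 0.50, q / p1, 0.70):    # q / p1 is the indifference point
    print(f"tie utility u = {u:.3f}: EU(kick) = {p1 * u:.3f}, EU(two-point) = {q:.3f}")
```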

15.
Building models of expert decision-making behavior from examples of experts’ decisions continues to receive considerable research attention. In the 1960s and 1970s, linear models derived by statistical methods were studied extensively. More recently, rule-based expert systems derived by induction algorithms have been the focus of attention. Few studies compare the two approaches. This paper reports on a study that compared linear models derived by logistic regression with rule-based systems produced by two induction algorithms—ID3 and the genetic algorithm. The techniques performed comparably in modeling the experts at one task, graduate admissions, but differed significantly at a second task, bidder selection.

16.
Fred Glover, Decision Sciences, 1990, 21(4): 771-785
Discriminant analysis is an important tool for practical problem solving. Classical statistical applications have been joined recently by applications in the fields of management science and artificial intelligence. In a departure from the methodology of statistics, a series of proposals have appeared for capturing the goals of discriminant analysis in a collection of linear programming formulations. The evolution of these formulations has brought advances that have removed a number of initial shortcomings and deepened our understanding of how these models differ in essential ways from other familiar classes of LP formulations. We will demonstrate, however, that the full power of the LP discriminant analysis models has not been achieved, due to a previously undetected distortion that inhibits the quality of solutions generated. The purpose of this paper is to show how to eliminate this distortion and thereby increase the scope and flexibility of these models. We additionally show how these outcomes open the door to special model manipulations and simplifications, including the use of a successive goal method for establishing a series of conditional objectives to achieve improved discrimination.
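
The flavor of these LP formulations can be sketched with the classic minimize-sum-of-deviations (MSD) model. The fixed unit margin below is one simple normalization device, not Glover's specific correction, and the data are synthetic.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)

# MSD LP discriminant sketch. Variables: weights w, cutoff c, deviations d >= 0.
# Group A should satisfy w.x >= c + 1 - d_i, group B should satisfy w.x <= c - 1 + d_i;
# minimizing the total deviation yields a separating direction and cutoff.
XA = rng.normal([0, 0], 1.0, size=(20, 2))     # group A points
XB = rng.normal([3, 2], 1.0, size=(20, 2))     # group B points
nA, nB, p = len(XA), len(XB), 2

# variable order: [w (p), c (1), dA (nA), dB (nB)]
cost = np.r_[np.zeros(p + 1), np.ones(nA + nB)]
A_ub = np.zeros((nA + nB, p + 1 + nA + nB))
A_ub[:nA, :p], A_ub[:nA, p] = -XA, 1.0          # -w.x + c - dA <= -1
A_ub[:nA, p + 1:p + 1 + nA] = -np.eye(nA)
A_ub[nA:, :p], A_ub[nA:, p] = XB, -1.0          #  w.x - c - dB <= -1
A_ub[nA:, p + 1 + nA:] = -np.eye(nB)
b_ub = -np.ones(nA + nB)
bounds = [(None, None)] * (p + 1) + [(0, None)] * (nA + nB)

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
w, c = res.x[:p], res.x[p]
print("weights:", w.round(3), "cutoff:", round(c, 3),
      "total deviation:", round(res.fun, 3))
```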

17.
An empirical taxonomy of industrial customers' information source use is developed based on a survey of 636 industrial customers across a wide range of different purchase situations. The taxonomy reveals five distinct information source mixes. Each mix consists of the combination of individual information sources used in a purchase situation. The five information source mixes are related to select underlying characteristics of purchase situations. The results indicate that the multivariate dimensions of purchase involvement, purchase complexity, and multiple influence are all significantly related to customers' choice of an information source mix. Implications of the taxonomy for marketing management and research are discussed.

18.
Many industrial products have three phases in their product lives: infant-mortality, normal, and wear-out phases. In the infant-mortality phase, the failure rate is high, but decreasing; in the normal phase, the failure rate remains constant; and in the wear-out phase, the failure rate is increasing. A burn-in procedure may be used to reduce early failures before shipping a product to consumers. A cost model is formulated to find the optimal burn-in time, which minimizes the expected sum of manufacturing cost, burn-in cost, and warranty cost incurred by failed items found during the warranty period. A mixture of a Weibull distribution with shape parameter less than one and an exponential distribution (the W-E distribution) is used to describe the infant-mortality and normal phases of the product life. The product under consideration can be either repairable or non-repairable. When the change-point of the product life distribution is unknown, it is estimated by using the maximum-likelihood estimation method. The effects of sample size on estimation error and the performance of the model are studied, and a sensitivity analysis is performed to study the effects of several parameters of the W-E distribution and costs on the optimal burn-in time.
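
A numerical sketch of this kind of cost model for the non-repairable case (all parameters hypothetical): the mixture survival function gives both the probability of scrapping a unit during burn-in and the probability a shipped unit fails under warranty, and a one-dimensional search finds the cost-minimizing burn-in time.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# W-E mixture: a fraction pi of units are "weak" with decreasing failure rate
# (Weibull shape < 1); the rest fail exponentially at a low constant rate.
pi, k, theta, mu = 0.10, 0.5, 50.0, 2e-5   # mixture weight, Weibull shape/scale, exp rate
Tw = 8760.0                                 # one-year warranty, in hours
c_b, c_w, c_scrap = 0.02, 100.0, 15.0       # per-hour burn-in, warranty, scrap costs

def S(t):
    """Survival function of the W-E mixture."""
    return pi * np.exp(-(t / theta) ** k) + (1 - pi) * np.exp(-mu * t)

def expected_cost(t):
    p_burnin_fail = 1.0 - S(t)                   # scrapped during burn-in
    p_warranty = (S(t) - S(t + Tw)) / S(t)       # fails in warranty | shipped
    return c_b * t + c_scrap * p_burnin_fail + c_w * p_warranty

res = minimize_scalar(expected_cost, bounds=(0.0, 500.0), method="bounded")
print(f"optimal burn-in ~ {res.x:.1f} hours, expected cost {res.fun:.2f}")
```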

19.
This paper demonstrates the feasibility of applying nonlinear programming methods to solve the classification problem in discriminant analysis. The application represents a useful extension of previously proposed linear programming-based solutions for discriminant analysis. The analysis of data obtained by conducting a Monte Carlo simulation experiment shows that these new procedures are promising. Future research that should promote application of the proposed methods for solving classification problems in a business decision-making environment is discussed.

20.
The relative error in the usual estimator of a brand's market share is reformulated in terms of marketing parameters. Such error is shown to be influenced in an important way by market penetration, as well as by variation in brand and product category volume. Of particular interest is the result that the relative error does not depend on the actual share level. Using data from a marketing research firm that supplies share estimates to the health products industry, we find that the relative error may be substantial even when a large sample is available. An upper bound on this relative error is obtained using marketing parameters that can frequently be measured using industry data and a company's internal records, thus reducing the level of judgmental input required in the planning of sample surveys.
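
One standard route to results of this shape (a sketch, not necessarily the paper's derivation) is the delta method for a ratio estimator: the relative error of the share estimate depends on the coefficients of variation of brand and category volume and their correlation, but not on the share level itself. The numbers below are invented.

```python
import numpy as np

# Delta-method sketch for s_hat = (total brand volume) / (total category
# volume) over n sampled buyers. Approximately,
#   CV(s_hat)^2 ~ (CV_b^2 + CV_c^2 - 2*rho*CV_b*CV_c) / n,
# which involves volume variation and correlation, not the share level.
cv_b, cv_c, rho = 1.8, 0.9, 0.6   # brand-volume CV inflated by low market penetration

for n in (250, 1000, 4000):
    rel_err = np.sqrt((cv_b**2 + cv_c**2 - 2 * rho * cv_b * cv_c) / n)
    print(f"n = {n}: relative standard error of the share estimate ~ {rel_err:.1%}")
```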
