Similar Documents (20 results)
1.
This paper develops an explicit relationship between sample size, sampling error, and related costs for the application of multiple regression models in observational studies. Graphs and formulas for determining optimal sample sizes and related factors are provided to facilitate the application of the derived models. These graphs reveal that, in most cases, the imprecision of estimates and minimum total cost are relatively insensitive to increases in sample size beyond n=20. Because of the intrinsic variation of the regression model, even if larger samples are optimal, the relative change in the total cost function is small when the cost of imprecision is a quadratic function. A model-utility approach, however, may impose a lower bound on sample size that requires the sample size be larger than indicated by the estimation or cost-minimization approaches. Graphs are provided to illustrate lower-bound conditions on sample size. Optimal sample size in view of all considerations is obtained by the maximin criterion, the maximum of the minimum sample size for all approaches.
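The flattening of precision gains beyond n = 20 can be illustrated with a quick numerical sketch (the constants below are hypothetical; the point is only that a slope's standard error scales as 1/√n, so quadrupling the sample yields ever-smaller absolute gains):

```python
import numpy as np

# Hypothetical illustration: the standard error of a regression slope
# scales as sigma / sqrt(n * var_x), so precision gains flatten quickly.
sigma, var_x = 1.0, 1.0

def slope_se(n):
    return sigma / np.sqrt(n * var_x)

gain_small = slope_se(5) - slope_se(20)    # moving from n=5 to n=20
gain_large = slope_se(20) - slope_se(80)   # quadrupling n again
print(gain_small, gain_large)
```

The gain from n = 20 to n = 80 is half the gain from n = 5 to n = 20, in line with the abstract's observation.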

2.
This paper uses two recently developed tests to identify neglected nonlinearity in the relationship between excess returns on four asset classes and several economic and financial variables. After finding some evidence of possible nonlinearity, the paper investigates whether the predictive power of these variables could be enhanced by using neural network models instead of linear regression or GARCH models. Some evidence of nonlinearity was found in the relationships between the explanatory variables and large stocks and corporate bonds. The GARCH models are conditionally efficient with respect to neural network models, but the neural network models outperform GARCH models when financial performance measures are used. Consistent with the results of the tests for neglected nonlinearity, the neural network forecasts are conditionally efficient with respect to linear regression models for large stocks and corporate bonds, whereas the evidence is not statistically significant for small stocks and intermediate-term government bonds. This difference persists even when financial performance measures for individual asset classes are used for comparison.
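The specific nonlinearity tests are not spelled out in the abstract, but a RESET-style check conveys the idea: augment a linear fit with squared fitted values and see whether the fit improves markedly (all data below are simulated, not the asset-return series used in the paper):

```python
import numpy as np

# RESET-style check for neglected nonlinearity (illustrative):
# a large drop in RSS after adding squared fitted values signals that
# the linear model misses a nonlinear component.
rng = np.random.default_rng(5)
x = rng.normal(size=300)
y = x + 0.5 * x**2 + 0.2 * rng.normal(size=300)   # truly nonlinear DGP

X1 = np.column_stack([np.ones_like(x), x])
b1, *_ = np.linalg.lstsq(X1, y, rcond=None)
fitted = X1 @ b1
rss1 = ((y - fitted)**2).sum()

X2 = np.column_stack([X1, fitted**2])             # add squared fitted values
b2, *_ = np.linalg.lstsq(X2, y, rcond=None)
rss2 = ((y - X2 @ b2)**2).sum()
```

With this data-generating process the augmented model fits far better, exactly the symptom such tests look for.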

3.
Standard errors of the coefficients of a logistic regression (a binary response model) based on the asymptotic formula are compared, through Monte Carlo simulations, to those obtained from the bootstrap. The computer-intensive bootstrap method, a nonparametric alternative to the asymptotic estimate, overestimates the true value of the standard errors, while the asymptotic formula underestimates it. However, for small samples the bootstrap estimates are substantially closer to the true value than their counterparts derived from the asymptotic formula. The methodology is discussed using two illustrative data sets. The first example deals with a logistic model explaining the log-odds of passing the ERA amendment by the 1982 deadline as a function of the percentage of women legislators and the percentage vote for Reagan. In the second example, the probability that an ingot is ready to roll is modelled using heating time and soaking time as explanatory variables. The results agree with those obtained from the simulations. The study's value for better decision making through accurate statistical inference is discussed.
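A minimal sketch of the comparison, using a hand-rolled Newton-Raphson logistic fit on simulated data (not the ERA or ingot data sets; the ridge term and clipping are numerical safeguards added here, not part of the standard estimator):

```python
import numpy as np

def fit_logit(X, y, iters=30):
    """Newton-Raphson logistic fit; returns coefficients and asymptotic SEs."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        eta = np.clip(X @ b, -30, 30)          # guard against overflow
        p = 1.0 / (1.0 + np.exp(-eta))
        W = p * (1.0 - p)
        H = X.T @ (X * W[:, None]) + 1e-8 * np.eye(X.shape[1])  # tiny ridge
        b = b + np.linalg.solve(H, X.T @ (y - p))
    cov = np.linalg.inv(H)                      # inverse information matrix
    return b, np.sqrt(np.diag(cov))

rng = np.random.default_rng(0)
n = 40
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(0.5 + x)))).astype(float)

_, se_asymptotic = fit_logit(X, y)
# Bootstrap: refit on resampled rows and take the spread of the estimates
idx = rng.integers(0, n, size=(300, n))
boot = np.array([fit_logit(X[i], y[i])[0] for i in idx])
se_boot = boot.std(axis=0)
```

Repeating this over many Monte Carlo draws of the original sample is what lets the paper compare both estimators to the true sampling variability.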

4.
Four discriminant models were compared in a simulation study: Fisher's linear discriminant function [14], Smith's quadratic discriminant function [34], the logistic discriminant model, and a model based on linear programming [17]. The study was conducted to estimate expected rates of misclassification for these four procedures when observations were sampled from a variety of normal and nonnormal distributions. In contrast to previous research, data were taken from four types of kurtotic population distributions. The results indicate that the four discriminant procedures are robust to data from many types of distributions. The misclassification rates for both the logistic discriminant model and the formulation based on linear programming consistently decreased as the kurtosis in the data increased, although the decreases were of small magnitude. None of these procedures yielded statistically significantly lower rates of misclassification under nonnormality. The quadratic discriminant function produced significantly lower error rates when the variances across groups were heterogeneous.
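As a baseline for the comparison, Fisher's linear discriminant for two groups can be sketched in a few lines (simulated bivariate normal data here, not the study's kurtotic distributions):

```python
import numpy as np

# Minimal Fisher linear discriminant on two simulated normal groups
# (illustrative only; the study's kurtotic populations are not reproduced).
rng = np.random.default_rng(6)
A = rng.normal(0.0, 1.0, size=(100, 2))   # group A centred at (0, 0)
B = rng.normal(2.0, 1.0, size=(100, 2))   # group B centred at (2, 2)

Sw = np.cov(A.T) + np.cov(B.T)            # pooled within-group scatter
w = np.linalg.solve(Sw, A.mean(axis=0) - B.mean(axis=0))
c = 0.5 * w @ (A.mean(axis=0) + B.mean(axis=0))   # midpoint cutoff

# classify as A when the projection exceeds the cutoff
errors = (A @ w < c).sum() + (B @ w > c).sum()
error_rate = errors / 200.0
```

Replacing the pooled scatter with group-specific covariances gives the quadratic rule that the study found superior under heterogeneous variances.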

5.
The matched-pairs methodology is becoming increasingly popular as a means of controlling extraneous factors in business research. This paper develops discriminant procedures for matched data and examines the properties of these methods. Data from a recent study by Hunt [14] on the determinants of inventory method choice are used to contrast the performance of the different methods. While all of the methods yield the same set of discriminating variables, those procedures that allow for the dependence among observations within a pair provide greater classificatory power than traditional multivariate techniques.
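Why allowing for within-pair dependence pays off can be seen in a simulated toy example (this is the underlying idea, not the paper's discriminant procedures): differencing within pairs cancels the shared pair effect and sharpens the group contrast.

```python
import numpy as np

# Hypothetical matched-pairs data: each pair shares a common effect that
# differencing removes, leaving only the group contrast plus noise.
rng = np.random.default_rng(7)
pair_effect = rng.normal(0.0, 2.0, size=100)       # common to both members
grp1 = pair_effect + rng.normal(1.0, 1.0, size=100)
grp0 = pair_effect + rng.normal(0.0, 1.0, size=100)

d = grp1 - grp0                                    # pair effect drops out
t_paired = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
```

Treating the 200 observations as independent would leave the pair-effect variance in the denominator and dilute the contrast.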

6.
Although recent work provides insightful theoretical and practical suggestions for improving contextual distance research in international management, the fundamental problem with using distance indicators as explanatory variables remains too little recognized and largely unaddressed. The problem is that cross-national distance metrics partially identify the host and/or home countries in one's sample, which I term location-identification. Location-identification can occur irrespective of the number of home/host countries considered and means that a distance indicator partly captures country fixed effects when used as an independent or explanatory variable. As a result, in empirical distance research, genuine distance effects are often confounded with country-specific measurement error in the dependent variable as well as with direct effects of various home- or host-country features. I present empirical evidence on the pervasiveness of this critical challenge to cross-national distance research and propose a practical and effective solution: using “pure” distance indicators that are cleansed of confounding home- and host-location influences.
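The extreme case of location-identification is easy to demonstrate numerically: with a single home country, any cross-national distance indicator is an exact function of the host country and is therefore fully absorbed by host-country fixed effects (the distances below are made up):

```python
import numpy as np

# With one home country, "distance" is just a relabelling of the host
# country, so host dummies explain it perfectly (R^2 = 1).
rng = np.random.default_rng(1)
hosts = rng.integers(0, 5, size=100)                  # 5 hypothetical hosts
dist = np.array([3.1, 0.7, 2.2, 4.5, 1.9])[hosts]     # one distance per host
D = np.eye(5)[hosts]                                   # host-country dummies

beta, *_ = np.linalg.lstsq(D, dist, rcond=None)
resid = dist - D @ beta
r2 = 1.0 - (resid**2).sum() / ((dist - dist.mean())**2).sum()
```

With multiple home countries the collinearity is partial rather than exact, which is why the abstract describes distance indicators as *partly* capturing country fixed effects.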

7.
This article presents an efficient way of dealing with adaptive expectations models—a way that makes use of all the information available in the data. The procedure is based on multiple-input transfer functions (MITFs): by calculating lead and lag cross correlations between innovations associated with the variables in the model, it is possible to determine which periods have the greatest effects on the dependent variable. If information about k periods ahead is required, fitted values for the expectation variables are used to generate k-period-ahead forecasts. These in turn can be used in the estimation of the transfer function equation, which not only contains the usual lagged variables but also allows for incorporation of lead-fitted values for the expectation variables. The MITF identification and estimation procedures used are based on the corner method. The method is contrasted with the Almon distributed-lag approach using a model relating stock market prices to interest rates and expected corporate profits.
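The lead/lag cross-correlation diagnostic at the heart of the procedure can be sketched on simulated innovation series, where y is constructed to respond to x's innovations with a lag of two periods (illustrative only, not the MITF estimation or the corner method):

```python
import numpy as np

# Lead/lag cross-correlations between two series' innovations:
# the spike at k = 2 identifies which period matters most.
rng = np.random.default_rng(8)
e = rng.normal(size=500)                              # common innovations
x = e + 0.1 * rng.normal(size=500)
y = np.concatenate([rng.normal(size=2), e[:-2]]) + 0.1 * rng.normal(size=500)

def ccf(a, b, k):
    """Correlation between a_t and b_{t+k}, for k >= 0."""
    if k == 0:
        return np.corrcoef(a, b)[0, 1]
    return np.corrcoef(a[:-k], b[k:])[0, 1]

lag_corrs = {k: ccf(x, y, k) for k in range(4)}
```

In the MITF setting the same correlogram, computed in both directions, is what reveals which leads and lags to carry into the transfer function equation.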

8.
In this paper we present empirical evidence on the relationship between board remuneration of a sample of large Spanish companies and a set of explanatory variables such as performance and size of the company. The objective is to provide additional empirical evidence based on the agency theory for the Spanish institutional context, which differs from most ‘Anglo-Saxon’ model studies. We focus on the impact of a company's governance structure on the relationship between pay and performance. Specifically, we consider ownership concentration and firm leverage as key determinants of the board–shareholders relationship. Our results confirm the positive relationship between board remuneration and company performance, which is stronger for book values than for stock market measures. Industry performance also explains the remuneration and provides useful information for evaluating board behaviour. Company size is also related to board remuneration and affects the pay–performance relationship, although it is not relevant when we use an elasticity approach. Finally, the governance structure of companies is relevant when explaining the power of the compensation–performance relationship, and differences between the impact of ownership concentration and firm leverage on this relationship are found.

9.
The implications of constrained dependent and independent variables for model parameters are examined. In the context of linear model systems, it is shown that polyhedral constraints on the dependent variables will hold over the domain of the independent variables when a set of polyhedral constraints is satisfied by the model parameters. This result may be used in parameter estimation, in which case all predicted values of the dependent variables are consistent with constraints on the actual values. Also, the implicit constraints that define the set of parameters for many commonly used linear stochastic models with an error term yield values of the dependent variables consistent with the explicit constraints. Models possessing these properties are termed “logically consistent”.
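A toy instance of logical consistency: in a linear share system, predicted shares sum to one for every value of the regressor exactly when the coefficients satisfy the corresponding linear equality constraints (the coefficients below are made up):

```python
import numpy as np

# Share equations y_i = a_i + b_i * x; requiring sum(y) = 1 for all x
# translates into constraints on the parameters: sum(a) = 1, sum(b) = 0.
a = np.array([0.5, 0.3, 0.2])
b = np.array([0.10, -0.04, -0.06])

for x in (0.0, 2.5, -7.0):
    shares = a + b * x
    assert abs(shares.sum() - 1.0) < 1e-12   # holds for every x
```

Imposing such parameter constraints at estimation time is what guarantees that all *predicted* values respect the constraints on the actual values.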

10.
This study examines whether and to what extent the presence and intensity of knowledge collaboration across different partners affect business model reconfiguration (BMR). We build on the business model (BM) literature and operationalize BMR by introducing the presence and intensity of collaboration and firm-size effects as the main explanatory factors affecting the propensity for incremental and radical BMR. We analyze a large sample of UK firms during 2002–2014 to capture the effect of knowledge collaboration and firm size on BMR. Incremental forms of BMR are positively influenced by the presence and intensity of knowledge collaboration, while radical forms of BMR are affected by the intensity of collaboration with customers and by large firms' collaboration with suppliers. Furthermore, firms of different sizes do not benefit equally from knowledge collaboration with suppliers for either incremental or radical BMR, although they do benefit equally from collaboration with other partner types.

11.
Optimal linear discriminant models maximize percentage accuracy for dichotomous classifications, but are rarely used because a theoretical framework that allows one to make valid statements about the statistical significance of the outcomes of such analyses does not exist. This paper describes an analytic solution for the theoretical distribution of optimal values for univariate optimal linear discriminant analysis, under the assumption that the data are random and continuous. We also present the theoretical distribution for sample sizes up to N = 30. The discovery of a statistical framework for evaluating the performance of optimal discriminant models should greatly increase their use by scientists in all disciplines.

12.
An assumption of multivariate normality for a decision model is validated in this paper. Measurements for the independent variables of a bond rating model were taken from a sample of municipal bonds. Three methods for examining both univariate and multivariate normality (including normal probability plots) are described and applied to the bond data. The results imply, after applying normalizing transformations to four of the variables, that the data reasonably approximate multivariate normality, thereby validating a distributional requirement of the discriminant-analysis-based decision model. The methods described in the paper may also be used by others interested in examining multivariate normality assumptions of decision models.
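One univariate check in this spirit can be sketched with numpy alone: compare sample skewness before and after a normalizing log transformation of a simulated right-skewed variable (a simpler diagnostic than the normal probability plots used in the paper):

```python
import numpy as np

def skewness(v):
    """Standardized third moment of a sample."""
    v = np.asarray(v, dtype=float)
    z = (v - v.mean()) / v.std()
    return (z**3).mean()

# Hypothetical right-skewed variable; log is its normalizing transformation.
rng = np.random.default_rng(2)
x = rng.lognormal(size=200)

skew_raw = skewness(x)          # strongly positive
skew_log = skewness(np.log(x))  # near zero after transformation
```

A near-zero skewness after transformation is consistent with (though not proof of) the normality the decision model requires; the paper's probability plots and multivariate checks go further.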

13.
This paper presents a minimum-cost methodology for determining a statistical sampling plan in substantive audit tests. In this model, the auditor specifies β, the risk of accepting an account balance as correct when it is not, according to audit evidence requirements. Using β as a constraint, the auditor then selects a sampling plan to optimize the trade-off between sampling costs and the costs of follow-up audit procedures. Tables to aid in this process and an illustration are provided.
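The paper's cost-minimizing tables are not reproduced here, but the role of β as a constraint can be sketched with a standard zero-error attribute-sampling bound: choose the smallest n such that a balance with tolerable error rate p1 is accepted with probability at most β (the numbers are hypothetical):

```python
import math

# Zero-error acceptance plan: accept only if the sample contains no errors,
# so the acceptance probability at error rate p1 is (1 - p1)^n <= beta.
beta, p1 = 0.05, 0.03    # hypothetical acceptance risk and tolerable rate
n = math.ceil(math.log(beta) / math.log(1.0 - p1))
print(n)                 # 99 for these inputs
```

The cost model then trades this minimum sample size off against the cost of follow-up procedures; the bound above only enforces the β constraint.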

14.
We present a general model for multi-item production and inventory management problems that include a resource restriction. The decision variables in the model can take on a variety of interpretations, but will typically represent cycle times, production batch sizes, number of production runs, or order quantities for each item. We consider environments where item demand rates are approximately constant and performing an activity such as producing a batch of a product or placing an order results in the consumption of a scarce resource that is shared among the items. Some examples of shared resources include limited machine capacity, a restriction on the amount of money that can be tied up in stock, or limited storage capacity. We focus on the case where the decision variables must be integer valued or selected from a discrete set of choices, such as when an integer number of production runs is desired for each item, or in order quantity problems where the items come in pack sizes containing more than one unit and, therefore, the order quantities must be an integer multiple of the pack sizes. We develop a heuristic and a branch-and-bound algorithm for solving the problem. The branch-and-bound algorithm includes reoptimization procedures and uses the heuristic to improve its performance. Computational testing indicates that the algorithms are effective for solving the general model.
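A feasibility sketch of the discrete-choice aspect (not the paper's branch-and-bound): round continuous order quantities to integer multiples of each item's pack size, then verify the shared-resource constraint (all figures are hypothetical):

```python
# Round continuous order quantities to integer multiples of pack sizes,
# then check a shared storage constraint (all numbers hypothetical).
q_cont = [37.4, 12.8, 55.1]    # unconstrained order quantities
pack = [10, 5, 25]             # items ship in these pack sizes
q = [max(p, p * round(qc / p)) for qc, p in zip(q_cont, pack)]

space = [1.0, 2.0, 0.5]        # storage used per unit of each item
used = sum(s * qi for s, qi in zip(space, q))
print(q, used)                 # [40, 15, 50] 95.0
```

Naive rounding like this can be infeasible or far from optimal in tight instances, which is exactly why the paper develops a heuristic plus a branch-and-bound algorithm.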

15.
The application of optimization techniques in digital simulation experiments is frequently complicated by the presence of large experimental error variances. Two of the more widely accepted design strategies for the resolution of this problem include the assignment of common pseudorandom number streams and the assignment of antithetic pseudorandom number streams to the experimental points. When considered separately, however, each of these variance-reduction procedures has rather restrictive limitations. This paper examines the simultaneous use of these two techniques as a variance-reduction strategy in response surface methodology (RSM) analysis of simulation models. A simulation of an inventory system is used to illustrate the application and benefits of this assignment procedure, as well as the basic components of an RSM analysis.
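The antithetic-streams idea in isolation can be sketched as follows: pairing each uniform draw u with 1 − u induces negative correlation between the paired responses and cuts the variance of the estimated mean (the response function below is a stand-in for a simulation model, not the paper's inventory system):

```python
import numpy as np

rng = np.random.default_rng(3)

def response(u):
    # stand-in for an expensive simulation's monotone response
    return np.exp(u)

# 1000 replications of an estimate based on 50 function evaluations each
u = rng.random((1000, 50))
plain = response(u).mean(axis=1)                      # independent draws
anti = 0.5 * (response(u[:, :25]) + response(1.0 - u[:, :25])).mean(axis=1)
```

Both estimators are unbiased for the same mean, but the antithetic version has markedly lower variance at equal cost; common streams play the analogous role when *comparing* experimental points.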

16.
There are numerous variable selection rules in classical discriminant analysis. These rules enable a researcher to distinguish significant variables from nonsignificant ones and thus provide a parsimonious classification model based solely on significant variables. Prominent among such rules are the forward and backward stepwise variable selection criteria employed in statistical software packages such as Statistical Package for the Social Sciences and BMDP Statistical Software. No such criterion currently exists for linear programming (LP) approaches to discriminant analysis. In this paper, a criterion is developed to distinguish significant from nonsignificant variables for use in LP models. This criterion is based on the “jackknife” methodology. Examples are presented to illustrate implementation of the proposed criterion.
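The jackknife machinery underlying such a criterion, in its simplest form: recompute the statistic leaving out one observation at a time and use the spread of the replicates as a standard error. For the sample mean this reproduces the classical s/√n exactly:

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 7.0, 9.0])
n = len(x)

# Leave-one-out replicates of the statistic (here, the mean)
jack = np.array([np.delete(x, i).mean() for i in range(n)])
se_jack = np.sqrt((n - 1) / n * ((jack - jack.mean()) ** 2).sum())

se_classic = x.std(ddof=1) / np.sqrt(n)   # agrees exactly for the mean
```

Applied to an LP discriminant coefficient instead of the mean, the same leave-one-out spread gives a significance yardstick where no analytic distribution is available.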

17.
This study presents a new robust estimation method that can produce a regression median hyperplane for any data set. The robust method starts with dual variables obtained by least absolute value estimation. It then utilizes two specially designed goal programming models to obtain regression median estimators that are less sensitive to a small sample size and a skewed error distribution than least absolute value estimators. The superiority of the new robust estimators over least absolute value estimators is confirmed by two illustrative data sets and a Monte Carlo simulation study.
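The intuition in the intercept-only case (a sketch of the motivation, not the paper's goal programming models): the least absolute value fit is the sample median, which resists a gross outlier that drags the least-squares fit, the mean, far off:

```python
import numpy as np

y = np.array([1.0, 1.2, 0.9, 1.1, 50.0])   # one gross outlier
lad_fit = np.median(y)   # minimizes sum |y - c|
ls_fit = y.mean()        # minimizes sum (y - c)^2
print(lad_fit, ls_fit)   # 1.1 10.84
```

The regression median hyperplane generalizes this robustness to models with regressors, which is where the dual variables and goal programming formulations come in.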

18.
This paper develops asymptotic optimality theory for statistical treatment rules in smooth parametric and semiparametric models. Manski (2000, 2002, 2004) and Dehejia (2005) have argued that the problem of choosing treatments to maximize social welfare is distinct from the point estimation and hypothesis testing problems usually considered in the treatment effects literature, and advocate formal analysis of decision procedures that map empirical data into treatment choices. We develop large-sample approximations to statistical treatment assignment problems using the limits of experiments framework. We then consider some different loss functions and derive treatment assignment rules that are asymptotically optimal under average and minmax risk criteria.

19.
This paper presents a test of the exogeneity of a single explanatory variable in a multivariate model. It does not require the exogeneity of the other regressors or the existence of instrumental variables. The fundamental maintained assumption is that the model must be continuous in the explanatory variable of interest. This test has power when unobservable confounders are discontinuous with respect to the explanatory variable of interest, and it is particularly suitable for applications in which that variable has bunching points. An application of the test to the problem of estimating the effects of maternal smoking on birth weight shows evidence of remaining endogeneity, even after controlling for the most complete covariate specification in the literature.

20.
Variable-Bandwidth Estimation Theory for Nonparametric Econometric Simultaneous Equations Models
Simultaneous equations models play an important role in economic policy making, analysis of economic structure, and economic forecasting. This paper combines the local linear estimation method of nonparametric regression with traditional estimation methods for simultaneous equations models. Under a random design (all variables in the model are random), it proposes a local linear instrumental-variable, variable-bandwidth estimator for nonparametric econometric simultaneous equations models. Using the law of large numbers and the central limit theorem, the estimator's large-sample properties at interior points are studied, and its consistency and asymptotic normality are proved; its convergence rate at interior points attains the optimal rate for nonparametric function estimation.
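The local linear building block (without the instrumental-variable and variable-bandwidth refinements of the paper) can be sketched as:

```python
import numpy as np

def local_linear(x0, x, y, h):
    """Gaussian-kernel local linear estimate of E[y|x] at x0."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)          # kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])  # local linear design
    XtW = X.T * w
    beta = np.linalg.solve(XtW @ X, XtW @ y)        # weighted least squares
    return beta[0]                                  # intercept = fit at x0

rng = np.random.default_rng(4)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=200)
est = local_linear(0.5, x, y, h=0.05)   # true value sin(pi) = 0
```

The paper's estimator replaces the regressors with instruments inside this weighted least-squares step and lets the bandwidth h vary with the evaluation point.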
