Similar Documents
20 similar documents found (search time: 31 ms)
1.
Following a brief account of its principal components, the framework of statistical decision theory is shown to be applicable to selecting schedules by a heuristic procedure for the general J × M job shop problem. Sequential Bayesian strategies and explicit forms of stopping rules are obtained for the search procedure, together with bounds on required sample size.

2.
Gloudemans and Miller [6] provide an interesting and useful application of regression analysis to the problem of residential property assessment. However, there are a number of problems with their model, including: failure to consider the pitfalls of using stepwise regression in exploratory studies; failure to discuss significance levels and the appropriateness of the signs of individual regression coefficients; the use of ordinal data in the regression analysis; the use of an obscure and incorrect procedure for inflation adjustment; and a general lack of a priori reasoning in the development and analysis of the article. These problems are discussed and possible remedies are considered.

3.
Recent advances in statistical estimation theory have resulted in the development of new procedures, called robust methods, that can be used to estimate the coefficients of a regression model. Because such methods take into account the impact of discrepant data points during the initial estimation process, they offer a number of advantages over ordinary least squares and other analytical procedures (such as the analysis of outliers or regression diagnostics). This paper describes the robust method of analysis and illustrates its potential usefulness by applying the technique to two data sets. The first application uses artificial data; the second uses a data set analyzed previously by Tufte [15] and, more recently, by Chatterjee and Wiseman [6].
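As an illustration of the general idea only (not the specific robust estimator or data sets used in the paper), the following Python sketch contrasts ordinary least squares with a Huber M-estimator on simulated data containing a single discrepant point; the robust fit down-weights the outlier during estimation rather than flagging it afterwards.

```python
# Minimal sketch: OLS vs. a Huber M-estimator on data with one gross outlier.
# The data and the choice of Huber's norm are illustrative assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 40)
y = 2.0 + 0.5 * x + rng.normal(0, 0.5, 40)
y[0] += 15.0                                          # a single discrepant observation

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()                              # ordinary least squares
rlm = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()  # robust (Huber) fit

print("OLS slope:   ", ols.params[1])
print("Robust slope:", rlm.params[1])                 # much less influenced by the outlier
```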

4.
This paper analyzes the linear regression model y = xβ + ε under the conditional median assumption med(ε | z) = 0, where z is a vector of exogenous instrumental variables. We study inference on the parameter β when y is censored and x is endogenous. We treat the censored model as a model with interval observation of an outcome, thus obtaining an incomplete model with inequality restrictions on conditional median regressions. We analyze the identified features of the model and provide sufficient conditions for point identification of the parameter β. We use a minimum distance estimator to consistently estimate the identified features of the model. We show that, under point identification conditions and additional regularity conditions, the estimator based on inequality restrictions is asymptotically normal, and we derive its asymptotic variance. One can use our setup to treat the identification and estimation of endogenous linear median regression models with no censoring. A Monte Carlo analysis illustrates our estimator in the censored and the uncensored case.

5.
This paper considers studentized tests in time series regressions with nonparametrically autocorrelated errors. The studentization is based on robust standard errors with truncation lag M=bT for some constant b∈(0, 1] and sample size T. It is shown that the nonstandard fixed‐b limit distributions of such nonparametrically studentized tests provide more accurate approximations to the finite sample distributions than the standard small‐b limit distribution. We further show that, for typical economic time series, the optimal bandwidth that minimizes a weighted average of type I and type II errors is larger by an order of magnitude than the bandwidth that minimizes the asymptotic mean squared error of the corresponding long‐run variance estimator. A plug‐in procedure for implementing this optimal bandwidth is suggested and simulations (not reported here) confirm that the new plug‐in procedure works well in finite samples.
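For readers unfamiliar with the objects involved, the sketch below computes a Bartlett-kernel long-run variance estimate with truncation lag M = bT, the ingredient behind the robust standard errors discussed above. It is a generic illustration only; the paper's exact kernel choice, studentization, and plug-in bandwidth formula are not reproduced, and the MA(1) error series is invented for the example.

```python
# Generic sketch of a kernel (Bartlett) long-run variance estimate with lag M = b*T.
import numpy as np

def bartlett_lrv(u, b):
    """Bartlett-kernel long-run variance of a series u, truncation lag M = b*T."""
    u = np.asarray(u) - np.mean(u)
    T = len(u)
    M = max(1, int(b * T))                    # truncation lag M = bT
    lrv = np.dot(u, u) / T                    # lag-0 autocovariance
    for j in range(1, M + 1):
        w = 1.0 - j / (M + 1)                 # Bartlett weights
        gamma_j = np.dot(u[j:], u[:-j]) / T
        lrv += 2.0 * w * gamma_j
    return lrv

rng = np.random.default_rng(1)
e = rng.normal(size=500)
u = np.convolve(e, [1.0, 0.6], mode="valid")  # an MA(1) error series (illustrative)
print(bartlett_lrv(u, b=0.02), bartlett_lrv(u, b=0.5))   # small-b vs. large-b choices
```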

6.
In a recent article, Chatterjee and Greenwood [1] addressed the problem of multicollinearity in polynomial regression models. They noted that there is a high correlation between X and X²; therefore, a second-order polynomial model suffers the consequences of collinearity. Chatterjee and Greenwood [1] suggested a method they believe will overcome the problem. The contention of the present comment is that the suggested method accomplishes nothing and, indeed, has the potential to lead the unwary researcher to incorrect inferences and misinterpretation of the results.
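The collinearity in question, and the sense in which merely re-expressing the regressor leaves the fitted model unchanged, can be seen in a few lines of Python. The data are invented, and centering is used here purely as an example of a reparameterization that lowers the correlation; it is not necessarily the remedy proposed in [1].

```python
# Illustration: X and X^2 are highly correlated; centering X reduces the correlation
# but the fitted values of the quadratic regression are identical either way.
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(1, 10, 200)
y = 1.0 + 2.0 * x + 0.3 * x**2 + rng.normal(0, 1, 200)

print(np.corrcoef(x, x**2)[0, 1])          # close to 1: X and X^2 nearly collinear
xc = x - x.mean()
print(np.corrcoef(xc, xc**2)[0, 1])        # much smaller after centering

yhat_raw = np.polyval(np.polyfit(x, y, 2), x)      # quadratic in X
yhat_cen = np.polyval(np.polyfit(xc, y, 2), xc)    # quadratic in centered X
print(np.allclose(yhat_raw, yhat_cen))             # True: same fitted values
```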

7.
In this paper we investigate methods for testing the existence of a cointegration relationship among the components of a nonstationary fractionally integrated (NFI) vector time series. Our framework generalizes previous studies restricted to unit root integrated processes and permits simultaneous analysis of spurious and cointegrated NFI vectors. We propose a modified F‐statistic, based on a particular studentization, which converges weakly under both hypotheses, despite the fact that OLS estimates are consistent only under cointegration. This statistic leads to a Wald‐type test of cointegration when combined with a narrow‐band GLS‐type estimate. Our semiparametric methodology allows consistent testing of the spurious regression hypothesis against the alternative of fractional cointegration without prior knowledge of the memory of the original series, their short‐run properties, the cointegrating vector, or the degree of cointegration. This semiparametric aspect of the modeling does not lead to an asymptotic loss of power, permitting the Wald statistic to diverge faster under the alternative of cointegration than when testing for a hypothesized cointegration vector. Our simulations show that the method has power comparable to customary procedures under the unit root cointegration setup and maintains good properties in a general framework where other methods may fail. We illustrate the method by testing the cointegration hypothesis for nominal GNP and the simple‐sum (M1, M2, M3) monetary aggregates.

8.
This article presents an efficient way of dealing with adaptive expectations models, one that makes use of all the information available in the data. The procedure is based on multiple-input transfer functions (MITFs): by calculating lead and lag cross-correlations between innovations associated with the variables in the model, it is possible to determine which periods have the greatest effects on the dependent variable. If information about k periods ahead is required, fitted values for the expectation variables are used to generate k-period-ahead forecasts. These in turn can be used in the estimation of the transfer function equation, which not only contains the usual lagged variables but also allows for the incorporation of fitted lead values for the expectation variables. The MITF identification and estimation procedures used are based on the corner method. The method is contrasted with the Almon distributed-lag approach using a model relating stock market prices to interest rates and expected corporate profits.

9.
This paper studies a shape‐invariant Engel curve system with endogenous total expenditure, in which the shape‐invariant specification involves a common shift parameter for each demographic group in a pooled system of nonparametric Engel curves. We focus on the identification and estimation of both the nonparametric shapes of the Engel curves and the parametric specification of the demographic scaling parameters. The identification condition relates to bounded completeness, and the estimation procedure applies sieve minimum distance estimation of conditional moment restrictions, allowing for endogeneity. We establish a new root mean squared convergence rate for the nonparametric instrumental variable regression when the endogenous regressor may have unbounded support. Root‐n asymptotic normality and semiparametric efficiency of the parametric components are also established under a set of “low‐level” sufficient conditions. Our empirical application, using the U.K. Family Expenditure Survey, shows the importance of adjusting for endogeneity in both the nonparametric curvatures and the demographic parameters of systems of Engel curves.

10.
The focus of this paper is the nonparametric estimation of an instrumental regression function ϕ defined by conditional moment restrictions that stem from a structural econometric model E[Y − ϕ(Z) | W] = 0, which involves endogenous variables Y and Z and instruments W. The function ϕ is the solution of an ill‐posed inverse problem, and we propose an estimation procedure based on Tikhonov regularization. The paper analyzes identification and overidentification of this model and presents asymptotic properties of the estimated nonparametric instrumental regression function.
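A schematic example of Tikhonov regularization for a generic ill-posed linear inverse problem is given below. The smoothing operator, data, and regularization parameter are all invented for illustration; this is not the authors' estimator, only the regularization device the abstract names.

```python
# Tikhonov regularization for an ill-posed linear inverse problem K * phi = r.
import numpy as np

rng = np.random.default_rng(3)
n = 100
idx = np.arange(n)
K = np.exp(-((idx[:, None] - idx[None, :]) / 2.0) ** 2)   # a smoothing (blurring) operator
phi_true = np.sin(np.linspace(0, 3 * np.pi, n))
r = K @ phi_true + rng.normal(0, 0.01, n)                 # noisy "observed" data

alpha = 1e-3                                              # regularization parameter
phi_naive = np.linalg.solve(K, r)                         # unregularized solve: noise amplified
phi_tik = np.linalg.solve(K.T @ K + alpha * np.eye(n),    # Tikhonov-regularized solution
                          K.T @ r)

print(np.linalg.norm(phi_naive - phi_true))               # large error
print(np.linalg.norm(phi_tik - phi_true))                 # much smaller error
```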

11.
Box and Jenkins [3] have specified a procedure for the development of a “transfer function model,” a model that expresses the interrelationships between two time series. Involving the iterative repetition of identification, estimation, and checking stages, this procedure is comparable to their procedure for developing an autoregressive integrated moving average model for a single time series. The transfer function model development procedure has not been widely applied, owing to the absence of explanations understandable to the non-statistician and of reasonably priced computer algorithms. The intent of this paper is to provide the non-statistician with an explanation of the identification stage of the Box-Jenkins transfer function model development procedure. An extensive class of possible models is logically developed. The identification stage is specified step by step and is illustrated by three pairs of generated time series analyzed using computer algorithms written by the author. The paper presumes a general knowledge of the Box-Jenkins identification procedure for single-series models.
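As a small illustration of the cross-correlation inspection at the heart of the identification stage, the sketch below generates an input series and an output that responds with a three-period delay and then examines the input–output cross-correlations; the spike at lag 3 identifies the delay. The data are simulated for the example, and the prewhitening step of the full procedure is omitted because the simulated input is already white noise.

```python
# Cross-correlations between an input series and a delayed output (illustrative data).
import numpy as np

rng = np.random.default_rng(4)
n = 500
x = rng.normal(size=n)                     # input series (already white noise here)
y = np.zeros(n)
y[3:] = 2.0 * x[:-3]                       # output responds with a 3-period delay
y += rng.normal(scale=0.5, size=n)

def cross_corr(x, y, lag):
    """Correlation between x_{t-lag} and y_t."""
    if lag == 0:
        return np.corrcoef(x, y)[0, 1]
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

for k in range(7):
    print(k, round(cross_corr(x, y, k), 2))   # a spike at lag 3, near zero elsewhere
```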

12.
Although the dual resource-constrained (DRC) system has been studied, the decision rule used to determine when workers are eligible for transfer largely has been ignored. Some earlier studies examined the impact of this rule [5] [12] [15] but did not include labor-transfer times in their models. Gunther [6] incorporated labor-transfer times into his model, but the model involved only one worker and two machines. No previous study has examined decision rules that initiate labor transfers based on labor needs (“pull” rules); labor transfers always have been initiated based on lack of need (“push” rules). This study examines three “pull” variations of the “When” labor-assignment decision rule and compares their performance to that of two “push” rules and a comparable machine-limited system. A nonparametric statistical test, Jonckheere's S statistic, is used to test the significance of the rankings of the rules; a robust parametric multiple-comparison test, Tukey's B statistic, is used to test the differences. One “pull” and one “push” decision rule provide similar performance and consistently top the rankings. Decision rules for determining when labor should be transferred from one work area to another are valuable aids for managers. This is especially true for the ever-increasing number of managers operating in organizations that recognize the benefits of a cross-trained work force. Recently there has been much interest in cross-training workers, perhaps because one of the mechanisms used in just-in-time systems to handle unbalanced work loads is to have cross-trained workers who can be shifted as demand dictates [8]. If management is to take full advantage of a cross-trained work force, it needs to know when to transfer workers.

13.
This study evaluated the effects of 3 training procedures on the correct implementation of a dog walking and enrichment protocol (DWEP). During the shelter’s typical training, volunteers correctly implemented just over half of all DWEP steps (M = 55.2%). Correct implementation of the DWEP procedure improved when participants completed a video-based self-training package (M = 75.3%) but did not reach the preestablished mastery criterion of 85% fidelity with 0 safety errors. Correct implementation improved during coaching (M = 90.6%), which consisted of modeling and positive and corrective feedback, and was maintained during 1-week and 1-month follow-up probes. Criterion performance was demonstrated by 2 of 3 participants at the conclusion of the study.

14.
This paper attempts to isolate and analyze the principal ideas of multiobjective optimization. We do this without casting aspersions on single-objective optimization or championing any one multiobjective technique. We examine each fundamental idea for strengths and weaknesses and subject two—efficiency and utility—to extended consideration. Some general recommendations are made in light of this analysis. Besides the simple advice to retain single-objective optimization as a possible approach, we suggest that three broad classes of multiobjective techniques are very promising in terms of reliably, and believably, achieving a most preferred solution. These are: (1) partial generation of the efficient set, a rubric we use to unify a wide spectrum of both interactive and analytic methods; (2) explicit utility maximization, a much-overlooked approach combining multiattribute decision theory and mathematical programming; and (3) interactive implicit utility maximization, the popular class of methods introduced by Geoffrion, Dyer, and Feinberg [24] and extended significantly by others.

15.
We propose inference procedures for partially identified population features for which the population identification region can be written as a transformation of the Aumann expectation of a properly defined set-valued random variable (SVRV). An SVRV is a mapping that associates a set (rather than a real number) with each element of the sample space. Examples of population features in this class include interval‐identified scalar parameters, best linear predictors with interval outcome data, and parameters of semiparametric binary models with interval regressor data. We extend the analogy principle to SVRVs and show that the sample analog estimator of the population identification region is given by a transformation of a Minkowski average of SVRVs. Using results from the mathematics literature on SVRVs, we show that this estimator converges in probability to the population identification region with respect to the Hausdorff distance. We then show that the Hausdorff distance and the directed Hausdorff distance between the population identification region and the estimator, when properly normalized by √n, converge in distribution to functions of a Gaussian process whose covariance kernel depends on parameters of the population identification region. We provide consistent bootstrap procedures to approximate these limiting distributions. Using arguments similar to those applied for vector-valued random variables, we develop a methodology to test assumptions about the true identification region and its subsets. We show that these results can be used to construct a confidence collection and a directed confidence collection, that is, collections of sets that, when specified as a null hypothesis for the true value (a subset of values) of the population identification region, cannot be rejected by our tests.
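For concreteness, the directed and symmetric Hausdorff distances in which the limit theory above is stated can be computed for finite point sets with SciPy, as in the hypothetical example below (the two sets are invented; the set-valued estimation itself is not reproduced).

```python
# Directed and symmetric Hausdorff distances between two finite point sets.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

rng = np.random.default_rng(5)
A = rng.uniform(0, 1, size=(200, 2))          # stand-in for an "estimated" set
B = A + rng.normal(0, 0.02, size=A.shape)     # stand-in for the "population" set

d_AB = directed_hausdorff(A, B)[0]            # directed distance from A to B
d_BA = directed_hausdorff(B, A)[0]            # directed distance from B to A
hausdorff = max(d_AB, d_BA)                   # symmetric Hausdorff distance
print(d_AB, d_BA, hausdorff)
```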

16.
Today there are more than 80,000 chemicals in commerce and the environment. The potential human health risks are unknown for the vast majority of these chemicals as they lack human health risk assessments, toxicity reference values, and risk screening values. We aim to use computational toxicology and quantitative high‐throughput screening (qHTS) technologies to fill these data gaps, and begin to prioritize these chemicals for additional assessment. In this pilot, we demonstrate how we were able to identify that benzo[k]fluoranthene may induce DNA damage and steatosis using qHTS data and two separate adverse outcome pathways (AOPs). We also demonstrate how bootstrap natural spline‐based meta‐regression can be used to integrate data across multiple assay replicates to generate a concentration–response curve. We used this analysis to calculate an in vitro point of departure of 0.751 μM and risk‐specific in vitro concentrations of 0.29 μM and 0.28 μM for 1:1,000 and 1:10,000 risk, respectively, for DNA damage. Based on the available evidence, and considering that only a single HSD17B4 assay is available, we have low overall confidence in the steatosis hazard identification. This case study suggests that coupling qHTS assays with AOPs and ontologies will facilitate hazard identification. Combining this with quantitative evidence integration methods, such as bootstrap meta‐regression, may allow risk assessors to identify points of departure and risk‐specific internal/in vitro concentrations. These results are sufficient to prioritize the chemicals; however, in the longer term we will need to estimate external doses for risk screening purposes, such as through margin of exposure methods.
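A heavily simplified sketch of the bootstrap-plus-spline idea follows: resample assay replicates within each test concentration, refit a smooth concentration–response curve, and read off the concentration at which the fitted curve reaches a benchmark response. All numerical values are invented, and SciPy's smoothing spline stands in for the natural-spline meta-regression used in the article.

```python
# Bootstrap concentration-response fitting and benchmark-concentration readout (illustrative).
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(7)
conc = np.logspace(-2, 1, 8)                                  # 8 test concentrations (uM)
reps = 100 / (1 + (0.75 / conc) ** 1.5)[:, None] \
       + rng.normal(0, 5, (8, 3))                             # 3 noisy replicates per concentration

def benchmark_conc(mean_resp, bmr=10.0):
    """Concentration at which a spline fit to mean responses first exceeds the benchmark."""
    spl = UnivariateSpline(np.log10(conc), mean_resp, k=3, s=50.0)
    grid = np.logspace(-2, 1, 400)
    above = np.where(spl(np.log10(grid)) >= bmr)[0]
    return grid[above[0]] if above.size else np.nan

# Bootstrap: resample replicates within each concentration, refit, re-read the benchmark dose.
boots = []
for _ in range(500):
    draw = reps[np.arange(8)[:, None], rng.integers(0, 3, (8, 3))].mean(axis=1)
    boots.append(benchmark_conc(draw))
print(np.nanpercentile(boots, [5, 50, 95]))                   # point of departure with uncertainty
```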

17.
The impact of R&D on growth through spillovers has been a major topic of economic research over the last thirty years. A central problem in the literature is that firm performance is affected by two countervailing “spillovers”: a positive effect from technology (knowledge) spillovers and a negative business-stealing effect from product market rivals. We develop a general framework incorporating these two types of spillovers and implement this model using measures of a firm's position in technology space and product market space. Using panel data on U.S. firms, we show that technology spillovers quantitatively dominate, so that the gross social returns to R&D are at least twice as high as the private returns. We identify the causal effect of R&D spillovers by using changes in federal and state tax incentives for R&D. We also find that smaller firms generate lower social returns to R&D because they operate more in technological niches. Finally, we detail the desirable properties of an ideal spillover measure and how existing approaches, including our new Mahalanobis measure, compare to these criteria.

18.
In an earlier issue of Decision Sciences, Jesse, Mitra, and Cox [1] examined the impact of inflationary conditions on the economic order quantity (EOQ) formula. Specifically, the authors analyzed the effect of inflation on order quantity decisions by means of a model that takes into account both inflationary trends and time discounting (over an infinite time horizon). In their analysis, the authors utilized two models: a current-dollars model and a constant-dollars model. These models were derived, of course, by setting up a total cost equation in the usual manner and then finding the optimum order quantity that minimizes the total cost. Jesse, Mitra, and Cox [1] found that the EOQ is approximately the same under both conditions, with or without inflation. However, we disagree with this conclusion and show that the EOQ will be different under inflationary conditions, provided that the inflationary conditions are properly accounted for in the formulation of the total cost model.
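For reference, the classical (inflation-free) EOQ that both models modify is EOQ = sqrt(2DS/H), where D is annual demand, S the ordering cost per order, and H the annual holding cost per unit. A minimal computation with invented parameter values is shown below; the inflation-adjusted formulas at issue in the exchange are not reproduced here.

```python
# Classical EOQ with illustrative (invented) parameter values.
import math

D = 12_000   # annual demand (units/year)
S = 50.0     # ordering cost per order
H = 2.4      # holding cost per unit per year

eoq = math.sqrt(2 * D * S / H)   # EOQ = sqrt(2DS/H)
print(round(eoq))                # about 707 units per order
```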

19.
This paper develops an inferential theory for factor models of large dimensions. The principal components estimator is considered because it is easy to compute and is asymptotically equivalent to the maximum likelihood estimator (if normality is assumed). We derive the rate of convergence and the limiting distributions of the estimated factors, factor loadings, and common components. The theory is developed within the framework of large cross sections (N) and a large time dimension (T), to which classical factor analysis does not apply. We show that the estimated common components are asymptotically normal with a convergence rate equal to the minimum of the square roots of N and T. The estimated factors and their loadings are generally normal, although not always so. The convergence rate of the estimated factors and factor loadings can be faster than that of the estimated common components. These results are obtained under general conditions that allow for correlations and heteroskedasticities in both dimensions. Stronger results are obtained when the idiosyncratic errors are serially uncorrelated and homoskedastic. A necessary and sufficient condition for consistency is derived for large N but fixed T.
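A minimal sketch of the principal components estimator analyzed in the paper, under one standard normalization (F′F/T = I), is given below. The simulated panel, number of factors, and noise level are invented for illustration, and none of the inferential theory is reproduced.

```python
# Principal components estimation of factors and loadings from a T x N panel.
import numpy as np

def pc_factors(X, r):
    """Principal components estimates of r factors and loadings, normalization F'F/T = I."""
    T, N = X.shape
    eigval, eigvec = np.linalg.eigh(X @ X.T / (T * N))
    F = np.sqrt(T) * eigvec[:, -r:][:, ::-1]      # T x r estimated factors
    L = X.T @ F / T                               # N x r estimated loadings
    return F, L

rng = np.random.default_rng(6)
T, N, r = 100, 80, 2
F0 = rng.normal(size=(T, r))
L0 = rng.normal(size=(N, r))
X = F0 @ L0.T + rng.normal(scale=0.5, size=(T, N))   # simulated factor panel

F, L = pc_factors(X, r)
common = F @ L.T                                      # estimated common component
print(np.corrcoef(common.ravel(), (F0 @ L0.T).ravel())[0, 1])   # close to 1
```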

20.
Elicitation of expert opinion is important for risk analysis when only limited data are available. Expert opinion is often elicited in the form of subjective confidence intervals; however, these are prone to substantial overconfidence. We investigated the influence of the elicitation question format, in particular the number of steps in the elicitation procedure. In a 3‐point elicitation procedure, an expert is asked for a lower limit, an upper limit, and a best guess, the two limits creating an interval of some assigned confidence level (e.g., 80%). In our 4‐step interval elicitation procedure, experts were also asked for a realistic lower limit, upper limit, and best guess, but no confidence level was assigned; the fourth step was to rate their anticipated confidence in the interval produced. In our three studies, experts made interval predictions of rates of infectious diseases (Study 1, n = 21, and Study 2, n = 24: epidemiologists and public health experts) or of marine invertebrate populations (Study 3, n = 34: ecologists and biologists). We combined the results from our studies using meta‐analysis, which found average overconfidence of 11.9%, 95% CI [3.5, 20.3] (a hit rate of 68.1% for 80% intervals), a substantial decrease in overconfidence compared with previous studies. Studies 2 and 3 suggest that the 4‐step procedure is more likely to reduce overconfidence than the 3‐point procedure (Cohen's d = 0.61, [0.04, 1.18]).
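The headline overconfidence figure is simply the assigned confidence level minus the observed hit rate, as the following one-line computation with the values quoted in the abstract shows.

```python
# Overconfidence = assigned confidence level - observed hit rate (values from the abstract).
assigned_confidence = 0.80   # 80% intervals
hit_rate = 0.681             # proportion of true values falling inside the intervals
overconfidence = assigned_confidence - hit_rate
print(f"{overconfidence:.1%}")   # 11.9%
```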
