Similar Documents
20 similar documents retrieved.
1.
Kevin M. Crofton, Risk Analysis, 2012, 32(10): 1784–1797
Traditional additivity models provide little flexibility in modeling the dose–response relationships of the single agents in a mixture. While the flexible single chemical required (FSCR) methods allow greater flexibility, their implicit nature is an obstacle to the formation of the parameter covariance matrix, which forms the basis for many statistical optimality design criteria. The goal of this effort is to develop a method for constructing the parameter covariance matrix for FSCR models, so that (local) alphabetic optimality criteria can be applied. Data from Crofton et al. are provided as motivation; in an experiment designed to determine the effect of 18 polyhalogenated aromatic hydrocarbons on serum total thyroxine (T4), the interaction among the chemicals was statistically significant. Gennings et al. fit the FSCR interaction threshold model to the data. The resulting estimate of the interaction threshold was positive and within the observed dose region, providing evidence of a dose-dependent interaction. However, the corresponding likelihood-ratio-based confidence interval was wide and included zero. In order to more precisely estimate the location of the interaction threshold, supplemental data are required. Using the available data as the first stage, the Ds-optimal second-stage design criterion was applied to minimize the variance of the hypothesized interaction threshold. Practical concerns associated with the resulting design are discussed and addressed using the penalized optimality criterion. Results demonstrate that the penalized Ds-optimal second-stage design can be used to more precisely define the interaction threshold while maintaining the characteristics deemed important in practice.
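As an illustration of the design criterion mentioned above, the sketch below shows a generic (local) Ds-optimality computation from a Fisher information matrix: maximize det(M) relative to the determinant of the nuisance-parameter block, which corresponds to minimizing the generalized variance of the parameters of interest. The quadratic dose-response example, the function names, and the design points are placeholders, not the FSCR interaction-threshold model itself.

```python
import numpy as np

def ds_criterion(doses, weights, grad, s_idx):
    """Generic (local) Ds-optimality criterion.

    doses   : candidate design points
    weights : design weights (summing to 1)
    grad    : function returning the gradient of the mean response with
              respect to all parameters at a given dose
    s_idx   : indices of the parameters of interest; the rest are nuisance
    Returns log det(M) - log det(M22), which a Ds-optimal design maximizes.
    """
    M = sum(w * np.outer(grad(x), grad(x)) for x, w in zip(doses, weights))
    nuis = [i for i in range(M.shape[0]) if i not in s_idx]
    M22 = M[np.ix_(nuis, nuis)]
    _, logdet_M = np.linalg.slogdet(M)
    _, logdet_M22 = np.linalg.slogdet(M22)
    # Larger values mean a smaller (generalized) variance for the subset of interest.
    return logdet_M - logdet_M22

# Toy example: quadratic dose-response E[y] = b0 + b1*d + b2*d^2,
# with b2 (index 2) playing the role of the parameter of interest.
grad = lambda d: np.array([1.0, d, d**2])
doses, weights = [0.0, 0.5, 1.0], [1/3, 1/3, 1/3]
print(ds_criterion(doses, weights, grad, s_idx=[2]))
```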

2.
Elicitation of expert opinion is important for risk analysis when only limited data are available. Expert opinion is often elicited in the form of subjective confidence intervals; however, these are prone to substantial overconfidence. We investigated the influence of elicitation question format, in particular the number of steps in the elicitation procedure. In a 3-point elicitation procedure, an expert is asked for a lower limit, upper limit, and best guess, the two limits creating an interval of some assigned confidence level (e.g., 80%). In our 4-step interval elicitation procedure, experts were also asked for a realistic lower limit, upper limit, and best guess, but no confidence level was assigned; the fourth step was to rate their anticipated confidence in the interval produced. In our three studies, experts made interval predictions of rates of infectious diseases (Study 1, n = 21 and Study 2, n = 24: epidemiologists and public health experts), or marine invertebrate populations (Study 3, n = 34: ecologists and biologists). We combined the results from our studies using meta-analysis, which found average overconfidence of 11.9%, 95% CI [3.5, 20.3] (a hit rate of 68.1% for 80% intervals)—a substantial decrease in overconfidence compared with previous studies. Studies 2 and 3 suggest that the 4-step procedure is more likely to reduce overconfidence than the 3-point procedure (Cohen's d = 0.61, [0.04, 1.18]).
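A minimal sketch of how a hit rate and an overconfidence score can be computed for elicited intervals, assuming an assigned confidence level of 80%; the interval data below are invented for illustration.

```python
def hit_rate(intervals, truths):
    """Fraction of realized values falling inside the elicited [low, high] intervals."""
    hits = sum(low <= t <= high for (low, high), t in zip(intervals, truths))
    return hits / len(truths)

# Hypothetical 80% intervals from five experts and the realized values.
intervals = [(2, 10), (5, 9), (1, 4), (3, 12), (6, 8)]
truths    = [7, 11, 2, 5, 9]

assigned_confidence = 0.80
rate = hit_rate(intervals, truths)
overconfidence = assigned_confidence - rate   # positive => intervals too narrow
print(f"hit rate = {rate:.2f}, overconfidence = {overconfidence:+.2f}")
```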

3.
Manufacturing capability has often been viewed as a major obstacle to achieving higher levels of customization. Companies follow various strategies ranging from equipment selection to order process management to cope with the challenges of increased customization. We examined how the customization process affects product performance and conformance in the context of a design-to-order (DTO) manufacturer of industrial components. Our competing risk hazard function model incorporates two thresholds, which we define as mismatch and manufacturing thresholds. Product performance was adversely affected when the degree of customization exceeded the mismatch threshold. Likewise, product conformance eroded when the degree of customization exceeded the manufacturing threshold. Relative sizes of the two thresholds have management implications for the subsequent investments to improve customization capabilities. Our research developed a rigorous framework to address two key questions relevant to the implementation of product customization: (1) what degrees of customization to offer, and (2) how to customize the product design process.

4.
This paper applies some general concepts in decision theory to a linear panel data model. A simple version of the model is an autoregression with a separate intercept for each unit in the cross section, with errors that are independent and identically distributed with a normal distribution. There is a parameter of interest γ and a nuisance parameter τ, an N×K matrix, where N is the cross-section sample size. The focus is on dealing with the incidental parameters problem created by a potentially high-dimensional nuisance parameter. We adopt a "fixed-effects" approach that seeks to protect against any sequence of incidental parameters. We transform τ to (δ, ρ, ω), where δ is a J×K matrix of coefficients from the least-squares projection of τ on an N×J matrix x of strictly exogenous variables, ρ is a K×K symmetric, positive semidefinite matrix obtained from the residual sums of squares and cross-products in the projection of τ on x, and ω is an (N−J)×K matrix whose columns are orthogonal and have unit length. The model is invariant under the actions of a group on the sample space and the parameter space, and we find a maximal invariant statistic. The distribution of the maximal invariant statistic does not depend upon ω. There is a unique invariant distribution for ω. We use this invariant distribution as a prior distribution to obtain an integrated likelihood function. It depends upon the observation only through the maximal invariant statistic. We use the maximal invariant statistic to construct a marginal likelihood function, so we can eliminate ω by integration with respect to the invariant prior distribution or by working with the marginal likelihood function. The two approaches coincide. Decision rules based on the invariant distribution for ω have a minimax property. Given a loss function that does not depend upon ω and given a prior distribution for (γ, δ, ρ), we show how to minimize the average—with respect to the prior distribution for (γ, δ, ρ)—of the maximum risk, where the maximum is with respect to ω. There is a family of prior distributions for (δ, ρ) that leads to a simple closed form for the integrated likelihood function. This integrated likelihood function coincides with the likelihood function for a normal, correlated random-effects model. Under random sampling, the corresponding quasi maximum likelihood estimator is consistent for γ as N→∞, with a standard limiting distribution. The limit results do not require normality or homoskedasticity (conditional on x) assumptions.
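One way to write down the "simple version" described in the opening sentences, assuming K = 1 so that the nuisance parameter reduces to a vector of unit-specific intercepts; the notation below is illustrative rather than necessarily the paper's.

```latex
% Simple version: an AR(1) with a unit-specific intercept (here K = 1, so tau is N x 1)
\[
  y_{it} = \gamma\, y_{i,t-1} + \tau_i + \varepsilon_{it},
  \qquad \varepsilon_{it} \overset{iid}{\sim} N(0,\sigma^2),
  \qquad i = 1,\dots,N,\; t = 1,\dots,T,
\]
% with gamma the parameter of interest and tau = (tau_1, ..., tau_N)' the incidental
% (nuisance) parameters that the invariance argument is designed to handle.
```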

5.
The conventional heteroskedasticity-robust (HR) variance matrix estimator for cross-sectional regression (with or without a degrees-of-freedom adjustment), applied to the fixed-effects estimator for panel data with serially uncorrelated errors, is inconsistent if the number of time periods T is fixed (and greater than 2) as the number of entities n increases. We provide a bias-adjusted HR estimator that is consistent under any sequence (n, T) in which n and/or T increase to ∞. This estimator can be extended to handle serial correlation of fixed order.
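For reference, a minimal sketch of the conventional estimator being discussed: within-transformation OLS followed by an HC0-style robust sandwich variance. This is the object whose fixed-T inconsistency the paper documents, not the paper's bias-adjusted correction; the function name and interface are illustrative.

```python
import numpy as np

def fe_hr_variance(y, X, entity):
    """Fixed-effects (within) OLS with a conventional heteroskedasticity-robust
    variance matrix (no degrees-of-freedom adjustment).
    y: (NT,) outcomes, X: (NT, k) regressors, entity: (NT,) integer entity labels."""
    y = np.asarray(y, dtype=float).copy()
    X = np.asarray(X, dtype=float).copy()
    # Within transformation: demean y and X by entity.
    for g in np.unique(entity):
        m = entity == g
        y[m] -= y[m].mean()
        X[m] -= X[m].mean(axis=0)
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    u = y - X @ beta
    meat = (X * u[:, None]).T @ (X * u[:, None])   # sum of u_i^2 * x_i x_i'
    V = XtX_inv @ meat @ XtX_inv                    # HC0-style sandwich
    return beta, V
```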

6.
Descending mechanisms for procurement (or, ascending mechanisms for selling) have been well-recognized for their simplicity from the viewpoint of bidders—they require less bidder sophistication as compared to sealed-bid mechanisms. In this study, we consider procurement under each of two types of constraints: (1) Individual/Group Capacities: limitations on the amounts that can be sourced from individual and/or subsets of suppliers, and (2) Business Rules: lower and upper bounds on the number of suppliers to source from, and on the amount that can be sourced from any single supplier. We analyze two procurement problems, one that incorporates individual/group capacities and another that incorporates business rules. In each problem, we consider a buyer who wants to procure a fixed quantity of a product from a set of suppliers, where each supplier is endowed with a privately known constant marginal cost. The buyer's objective is to minimize her total expected procurement cost. For both problems, we present descending auction mechanisms that are optimal mechanisms. We then show that these two problems belong to a larger class of mechanism design problems with constraints specified by polymatroids, for which we prove that optimal mechanisms can be implemented as descending mechanisms.

7.
8.
This paper considers studentized tests in time series regressions with nonparametrically autocorrelated errors. The studentization is based on robust standard errors with truncation lag M = bT for some constant b∈(0, 1] and sample size T. It is shown that the nonstandard fixed-b limit distributions of such nonparametrically studentized tests provide more accurate approximations to the finite sample distributions than the standard small-b limit distribution. We further show that, for typical economic time series, the optimal bandwidth that minimizes a weighted average of type I and type II errors is larger by an order of magnitude than the bandwidth that minimizes the asymptotic mean squared error of the corresponding long-run variance estimator. A plug-in procedure for implementing this optimal bandwidth is suggested and simulations (not reported here) confirm that the new plug-in procedure works well in finite samples.
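A sketch of the studentization being studied, for the simplest case of a sample mean: a Bartlett-kernel long-run variance with truncation lag M = bT. Under fixed-b asymptotics the resulting t-statistic has a nonstandard limit, so for non-negligible b its critical values differ from the standard normal ones; function names are illustrative.

```python
import numpy as np

def bartlett_lrv(u, b):
    """Long-run variance of the series u with a Bartlett kernel, truncation lag M = b*T."""
    u = np.asarray(u, dtype=float)
    u = u - u.mean()
    T = len(u)
    M = max(1, int(np.floor(b * T)))
    lrv = np.sum(u * u) / T
    for j in range(1, M + 1):
        w = 1.0 - j / (M + 1.0)                 # Bartlett weights
        gamma_j = np.sum(u[j:] * u[:-j]) / T    # lag-j autocovariance
        lrv += 2.0 * w * gamma_j
    return lrv

def studentized_mean(y, b, mu0=0.0):
    """t-statistic for H0: E[y] = mu0, studentized with the Bartlett LRV, M = b*T.
    With b held fixed, the limit is nonstandard (fixed-b), so normal critical
    values are only the small-b approximation discussed above."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    return np.sqrt(T) * (y.mean() - mu0) / np.sqrt(bartlett_lrv(y, b))
```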

9.
It is argued that construct validity is content validity and that prior content validity is necessary for predictive validity. Too many measures of abstract (multicomponential) constructs in management and organizational research exhibit poor content validity or else lose much of their content validity following unnecessary statistical 'purification' to select items. The measures produce misleading empirical results and can lead to erroneous acceptance or rejection of hypotheses or entire theories. Both problems – inadequate content validation and unnecessary statistical purification – are illustrated here for two new measures of the construct of 'export coordination' (Diamantopoulos and Siguaw, British Journal of Management, 2006, 17(4), pp. 263–282). Corrections according to Rossiter's C-OAR-SE procedure for scale development are specified.

10.
In resource-based models of job design, job resources, such as control and social support, are thought to help workers to solve problems. Few studies have examined this assumption. We analyzed 80 qualitative diary entries (N=29) and interviews (N=37) concerned with the in-role problem-solving requirements of medical technology designers in the UK. Four themes linked to using the resources of job control and social support for problem solving emerged. These were: (1) eliciting social support to solve problems; (2) exercising job control to solve problems; (3) co-dependence between eliciting social support and exercising job control to solve problems; and (4) using job resources to regulate affect. The results were largely supportive of the assumptions underpinning resource-based models of job design. They also indicated that the explanatory power of resource-based models of job design may be enhanced by considering interdependencies between various factors: how different job resources are used, workers' motivation to use resources, workers' knowledge of how to use resources and the use of resources from across organizational boundaries. The study provides qualitative support for the assumption that social support and job control are used to cope with demands.

11.
In a technology project, project integration represents the pooling together of complete, interdependent task modules to form a physical product or software delivering a desired functionality. This study develops and tests a conceptual framework that examines the interrelationships between the elements of work design, project integration challenges, and project performance. We identify two distinct elements of work design in technology projects: (i) the type of project organization based on whether a technology project spans a firm boundary (Domestic-Outsourcing) or a country boundary (Offshore-Insourcing) or both boundaries (Offshore-Outsourcing) or no boundaries (Domestic-Insourcing), and (ii) the joint coordination practices among key stakeholders in a technology project—namely, Onsite Ratio and Joint-Task Ownership. Next, we measure the effectiveness of project integration using integration glitches that capture the incompatibility among interdependent task modules during project integration. Based on analysis of data from 830 technology projects, the results highlight the differential effects of distributed project organizations on integration glitches. Specifically, we find that project organizations that span both firm and country boundaries (Offshore-Outsourcing) experience significantly higher levels of integration glitches compared to domestic project organizations (Domestic-Outsourcing and Domestic-Insourcing). The results further indicate that the relationship between project organization type and integration glitches is moderated by the extent of joint coordination practices in a project. That is, managers can actively lower integration glitches by increasing the levels of onsite ratio and by promoting higher levels of joint-task ownership, particularly in project organization types that span both firm and country boundaries (Offshore-Outsourcing). Finally, the results demonstrate the practical significance of studying integration glitches by highlighting their significant negative effect on project performance.

12.
This paper reports the results of an experimental comparison of three linear programming approaches and the Fisher procedure for the discriminant problem. The linear programming approaches include two formulations proposed by Freed and Glover and a newly proposed mixed-integer, linear goal programming formulation. Ten test problems were generated for each of the 36 cells in the three-factor, full-factorial experimental design. Each test problem consisted of a 30-case estimation sample and a 1,000-case holdout sample. Experimental results indicate that each of the four approaches was statistically preferable in certain cells of the experimental design. Sample-based rules are suggested for selecting an approach based on Hotelling's T² and Box's M statistics. Subject Areas: Statistical Techniques, Linear Statistical Models, and Linear Programming.
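For concreteness, the classical Fisher procedure used as one of the benchmarks, in its two-group form; the LP formulations are not reproduced here, and variable names are placeholders.

```python
import numpy as np

def fisher_discriminant(X1, X2):
    """Classical two-group Fisher linear discriminant.
    X1, X2: (n1, p) and (n2, p) estimation samples for the two groups.
    Returns the discriminant vector w and cutoff c; classify x into group 1
    if w @ x > c, otherwise into group 2."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    S1 = np.cov(X1, rowvar=False)
    S2 = np.cov(X2, rowvar=False)
    n1, n2 = len(X1), len(X2)
    Sp = ((n1 - 1) * S1 + (n2 - 1) * S2) / (n1 + n2 - 2)   # pooled covariance
    w = np.linalg.solve(Sp, m1 - m2)
    c = 0.5 * (w @ m1 + w @ m2)                             # midpoint cutoff
    return w, c

def classify(X, w, c):
    """Assign each row of X (e.g., a holdout sample) to group 1 or 2."""
    return np.where(X @ w > c, 1, 2)
```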

13.
A combinatorial optimization problem, called the Bandpass Problem, is introduced. Given a rectangular matrix A of binary elements {0,1} and a positive integer B called the Bandpass Number, a set of B consecutive non-zero elements in any column is called a Bandpass. No two bandpasses in the same column can have common rows. The Bandpass problem consists of finding an optimal permutation of rows of the matrix, which produces the maximum total number of bandpasses having the same given bandpass number in all columns. This combinatorial problem arises in the optimal packing of information flows on different wavelengths into groups to obtain the highest available cost reduction in designing and operating optical communication networks that use wavelength division multiplexing technology. Integer programming models of two versions of the Bandpass problem are developed. For a matrix A with three or more columns, the Bandpass problem is proved to be NP-hard. For matrices with one or two columns, a polynomial algorithm solving the problem to optimality is presented. For the general case, fast polynomial heuristic algorithms are presented, which provide near-optimal solutions acceptable for applications. The high quality of the generated heuristic solutions has been confirmed in extensive computational experiments. As an NP-hard combinatorial optimization problem with important applications, the Bandpass problem offers a challenge for researchers to develop efficient computational solution methods. To encourage further research, a Library of Bandpass Problems has been developed. The Library is open to the public and consists of 90 problems of different sizes (numbers of rows and columns, density of non-zero elements of matrix A, and bandpass number B), half of them with known optimal solutions and the other half without.
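A small sketch of the objective being maximized: for a given row permutation, the number of non-overlapping bandpasses of length B in each column can be counted with a greedy top-to-bottom scan. This only evaluates a permutation; it is not one of the paper's heuristics, and the toy instance is invented.

```python
def count_bandpasses(A, perm, B):
    """Total number of non-overlapping bandpasses of length B over all columns
    of the binary matrix A (list of rows) after reordering rows by `perm`."""
    rows = [A[i] for i in perm]
    n_cols = len(rows[0])
    total = 0
    for c in range(n_cols):
        run = 0
        for r in range(len(rows)):
            if rows[r][c] == 1:
                run += 1
                if run == B:      # a bandpass is completed; start counting afresh
                    total += 1
                    run = 0
            else:
                run = 0
    return total

# Toy instance: 4 rows, 2 columns, bandpass number B = 2.
A = [[1, 0],
     [1, 1],
     [0, 1],
     [1, 1]]
print(count_bandpasses(A, perm=[0, 1, 3, 2], B=2))   # 2 bandpasses after reordering
```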

14.
This paper introduces the model confidence set (MCS) and applies it to the selection of models. An MCS is a set of models that is constructed such that it will contain the best model with a given level of confidence. The MCS is in this sense analogous to a confidence interval for a parameter. The MCS acknowledges the limitations of the data, such that uninformative data yield an MCS with many models, whereas informative data yield an MCS with only a few models. The MCS procedure does not assume that a particular model is the true model; in fact, the MCS procedure can be used to compare more general objects, beyond the comparison of models. We apply the MCS procedure to two empirical problems. First, we revisit the inflation forecasting problem posed by Stock and Watson (1999), and compute the MCS for their set of inflation forecasts. Second, we compare a number of Taylor rule regressions and determine the MCS of the best regression in terms of in-sample likelihood criteria.
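A deliberately simplified elimination loop in the spirit of the MCS idea, assuming a (T x m) matrix of per-period losses. The published procedure relies on block bootstraps and carefully defined equivalence tests and elimination rules; this sketch substitutes a naive i.i.d. bootstrap of a max t-statistic, so treat it only as an illustration of the set-elimination logic, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def simple_mcs(losses, alpha=0.10, n_boot=999):
    """Very simplified model-confidence-set-style elimination.
    losses: (T, m) array of per-period losses for m models.
    Returns the indices of the surviving model set."""
    T, m = losses.shape
    active = list(range(m))
    while len(active) > 1:
        L = losses[:, active]
        d = L - L.mean(axis=1, keepdims=True)      # loss relative to the set average
        dbar = d.mean(axis=0)
        se = d.std(axis=0, ddof=1) / np.sqrt(T)
        t_obs = dbar / se
        t_max = t_obs.max()
        # Bootstrap the distribution of the max centered t-statistic (i.i.d. resampling).
        exceed = 0
        for _ in range(n_boot):
            idx = rng.integers(0, T, size=T)
            db = d[idx].mean(axis=0)
            sb = d[idx].std(axis=0, ddof=1) / np.sqrt(T)
            exceed += ((db - dbar) / sb).max() >= t_max
        p_value = (exceed + 1) / (n_boot + 1)
        if p_value < alpha:
            active.pop(int(np.argmax(t_obs)))      # eliminate the worst-looking model
        else:
            break                                   # remaining models form the set
    return active
```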

15.
Rossiter (2008) attempts to show that traditional measure development procedures are flawed. He illustrates his reasoning using measures of the export coordination construct (Diamantopoulos and Siguaw, British Journal of Management, 17 (2006), pp. 263–282), and 'corrects' these measures using the C-OAR-SE procedure for scale development. We explain the errors that Rossiter (2008) makes in his application of the C-OAR-SE procedure, and in the assumptions inherent in the C-OAR-SE procedure. We demonstrate that the 'corrected' measure that Rossiter (2008) develops using the C-OAR-SE procedure lacks validity. We conclude that the C-OAR-SE procedure needs more work if it is to become a useful tool for researchers.

16.
In this paper, we provide efficient estimators and honest confidence bands for a variety of treatment effects including local average (LATE) and local quantile treatment effects (LQTE) in data-rich environments. We can handle very many control variables, endogenous receipt of treatment, heterogeneous treatment effects, and function-valued outcomes. Our framework covers the special case of exogenous receipt of treatment, either conditional on controls or unconditionally as in randomized control trials. In the latter case, our approach produces efficient estimators and honest bands for (functional) average treatment effects (ATE) and quantile treatment effects (QTE). To make informative inference possible, we assume that key reduced-form predictive relationships are approximately sparse. This assumption allows the use of regularization and selection methods to estimate those relations, and we provide methods for post-regularization and post-selection inference that are uniformly valid (honest) across a wide range of models. We show that a key ingredient enabling honest inference is the use of orthogonal or doubly robust moment conditions in estimating certain reduced-form functional parameters. We illustrate the use of the proposed methods with an application to estimating the effect of 401(k) eligibility and participation on accumulated assets. The results on program evaluation are obtained as a consequence of more general results on honest inference in a general moment-condition framework, which arises from structural equation models in econometrics. Here, too, the crucial ingredient is the use of orthogonal moment conditions, which can be constructed from the initial moment conditions. We provide results on honest inference for (function-valued) parameters within this general framework where any high-quality, machine learning methods (e.g., boosted trees, deep neural networks, random forest, and their aggregated and hybrid versions) can be used to learn the nonparametric/high-dimensional components of the model. These include a number of supporting auxiliary results that are of major independent interest: namely, we (1) prove uniform validity of a multiplier bootstrap, (2) offer a uniformly valid functional delta method, and (3) provide results for sparsity-based estimation of regression functions for function-valued outcomes.
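A compact sketch of a cross-fitted doubly robust (orthogonal-moment) ATE estimator under unconfoundedness, one special case of the framework described above. Random forests stand in for the machine learning first stage (the abstract lists them among admissible learners); the function name, learners, and tuning choices are illustrative, not the paper's exact LATE/LQTE procedure.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold

def dr_ate(y, d, X, n_splits=5, clip=0.01, seed=0):
    """Cross-fitted doubly robust (AIPW / orthogonal-score) estimate of the ATE of a
    binary treatment d on outcome y given controls X (numpy array of shape (n, p)),
    assuming unconfoundedness. Returns the point estimate and its standard error."""
    y, d = np.asarray(y, float), np.asarray(d, int)
    psi = np.zeros(len(y))
    for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        # Nuisance functions estimated on the training fold only.
        m1 = RandomForestRegressor(random_state=seed).fit(
            X[train][d[train] == 1], y[train][d[train] == 1])
        m0 = RandomForestRegressor(random_state=seed).fit(
            X[train][d[train] == 0], y[train][d[train] == 0])
        g = RandomForestClassifier(random_state=seed).fit(X[train], d[train])
        p = np.clip(g.predict_proba(X[test])[:, 1], clip, 1 - clip)  # propensity score
        mu1, mu0 = m1.predict(X[test]), m0.predict(X[test])
        # Orthogonal (doubly robust) score evaluated on the held-out fold.
        psi[test] = (mu1 - mu0
                     + d[test] * (y[test] - mu1) / p
                     - (1 - d[test]) * (y[test] - mu0) / (1 - p))
    ate = psi.mean()
    se = psi.std(ddof=1) / np.sqrt(len(y))
    return ate, se
```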

17.
18.
We study a two-product inventory model that allows substitution. Both products can be used to supply demand over a selling season of N periods, with a one-time replenishment opportunity at the beginning of the season. A substitution could be offered even when the demanded product is available. The substitution rule is flexible in the sense that the seller can choose whether or not to offer substitution and at what price or discount level, and the customer may or may not accept the offer, with the acceptance probability being a decreasing function of the substitution price. The decisions are the replenishment quantities at the beginning of the season, and the dynamic substitution-pricing policy in each period of the season. Using a stochastic dynamic programming approach, we present a complete solution to the problem. Furthermore, we show that the objective function is concave and submodular in the inventory levels—structural properties that facilitate the solution procedure and help identify threshold policies for the optimal substitution/pricing decisions. Finally, with a state transformation, we also show that the objective function retains an analogous concavity property, which allows us to derive similar structural properties of the optimal policy for multiple-season problems.
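A single-period toy illustration of the substitution-pricing trade-off described above: the seller picks the substitution price p, which the customer accepts with a probability that decreases in p. The logistic acceptance curve, the zero continuation values, and all numbers are assumptions; the paper's full stochastic dynamic program is not reproduced here.

```python
import numpy as np

def best_substitution_price(v_accept, v_reject, prices, accept_prob):
    """Choose the substitution price maximizing expected value for one customer.
    v_accept(p) : seller's value if the substitute is sold at price p
                  (revenue plus any continuation value of the new inventory state)
    v_reject    : seller's value if the offer is declined
    accept_prob(p): probability the customer accepts the substitute at price p."""
    expected = [accept_prob(p) * v_accept(p) + (1 - accept_prob(p)) * v_reject
                for p in prices]
    k = int(np.argmax(expected))
    return prices[k], expected[k]

# Toy numbers: logistic acceptance curve (decreasing in price), zero continuation values.
accept_prob = lambda p: 1.0 / (1.0 + np.exp(1.5 * (p - 6.0)))
price_grid = np.linspace(2.0, 10.0, 81)
p_star, v_star = best_substitution_price(lambda p: p, 0.0, price_grid, accept_prob)
print(f"offer the substitute at {p_star:.2f} (expected revenue {v_star:.2f})")
```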

19.
Parametric modelling, driven by explicitly defined algorithms that generate synchronously auditable dynamic forms and patterns, has become a prominent method, especially in architecture. Although the use of parametric models has broadened in urban design, critical reflection on the actual and potential applications of the method in urbanism has so far remained limited. This paper relates parametric design to the contemporary understanding of urbanism with regard to the idea of design control in the context of complexity. From this perspective, the actual performance of the model in an urban context is discussed through the well-known Kartal-Pendik Masterplan project (Zaha Hadid Architects) in Istanbul, Turkey.

20.
The well-known deterministic resource-constrained project scheduling problem involves the determination of a predictive schedule (baseline schedule or pre-schedule) of the project activities that satisfies the finish–start precedence relations and the renewable resource constraints under the objective of minimizing the project duration. This baseline schedule serves as a basis for the execution of the project. During execution, however, the project can be subject to several types of disruptions that may disturb the baseline schedule. Management must then rely on a reactive scheduling procedure for revising or reoptimizing the baseline schedule. The objective of our research is to develop procedures for allocating resources to the activities of a given baseline schedule in order to maximize its stability in the presence of activity duration variability. We propose three integer programming–based heuristics and one constructive procedure for resource allocation. We derive lower bounds for schedule stability and report on computational results obtained on a set of benchmark problems.
