Similar Literature
20 similar articles found.
1.
Recent headlines and scientific articles projecting significant human health benefits from changes in exposures too often depend on unvalidated subjective expert judgments and modeling assumptions, especially about the causal interpretation of statistical associations. Some of these assessments are demonstrably biased toward false positives and inflated effect estimates. More objective, data-driven methods of causal analysis are available to risk analysts. These can help to reduce bias and increase the credibility and realism of health effects risk assessments and causal claims. For example, quasi-experimental designs and analyses allow alternative (noncausal) explanations for associations to be tested, and refuted if appropriate. Panel data studies examine empirical relations between changes in hypothesized causes and effects. Intervention and change-point analyses identify effects (e.g., significant changes in health effects time series) and estimate their sizes. Granger causality tests, conditional independence tests, and counterfactual causality models test whether a hypothesized cause helps to predict its presumed effects, and quantify exposure-specific contributions to response rates in differently exposed groups, even in the presence of confounders. Causal graph models let causal mechanistic hypotheses be tested and refined using biomarker data. These methods can potentially revolutionize the study of exposure-induced health effects, helping to overcome pervasive false-positive biases and move the health risk assessment scientific community toward more accurate assessments of the impacts of exposures and interventions on public health.
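As an illustration of one of the data-driven methods named above, here is a minimal sketch of a Granger causality test in Python (statsmodels), checking whether an exposure series helps predict a health-effect series. The series, lag order, and column names are hypothetical, simulated for the example.

```python
# Minimal sketch: Granger causality screen of a hypothetical exposure/effect pair.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 200
exposure = rng.normal(size=n).cumsum()           # hypothetical exposure series
effect = np.r_[0, 0.4 * exposure[:-1]] + rng.normal(size=n)  # lags exposure by one period

# Column order matters: the test asks whether column 2 helps predict column 1.
data = pd.DataFrame({"effect": effect, "exposure": exposure})
results = grangercausalitytests(data[["effect", "exposure"]], maxlag=2)
```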

2.
We test whether young adults who co-reside with their parents derive influence over household-level expenditure by earning income. We propose a new variant of the Engel curve consistent with the Quadratic Almost Ideal Demand System, which allows a simple test of income pooling. Our tests suggest that young adults and parents mostly pool their income: pooling is not rejected for 8 of 12 expenditure categories. We are more likely to reject income pooling between young adults and their parents in those expenditure categories where the model fit is highest, so our results may be interpreted as an upper bound on income pooling. We also apply our tests to income pooling between husbands and wives and find that pooling holds for 9 of 12 expenditure categories. We find the opposite relationship with fit: expenditure categories where fit is poor are those where we are most likely to reject income pooling.
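A minimal sketch of the kind of income-pooling test an Engel-curve regression permits: under pooling, only total expenditure should matter, so the young adult's income share should enter the budget-share equation with a zero coefficient. This is a simplified linear stand-in for the paper's QUAIDS-consistent specification; all variable names and data are simulated.

```python
# Minimal sketch: test income pooling by testing the young adult's income share.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
x = rng.lognormal(mean=7, sigma=0.5, size=n)      # total household expenditure
ya_share = rng.uniform(0, 0.5, size=n)            # young adult's share of income
lnx = np.log(x)
w = 0.3 - 0.02 * lnx + 0.001 * lnx**2 + rng.normal(0, 0.01, n)  # budget share

df = pd.DataFrame({"w": w, "lnx": lnx, "lnx2": lnx**2, "ya_share": ya_share})
fit = smf.ols("w ~ lnx + lnx2 + ya_share", data=df).fit()
print(fit.f_test("ya_share = 0"))  # failure to reject is consistent with pooling
```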

3.
We introduce the class of conditional linear combination tests, which reject null hypotheses concerning model parameters when a data-dependent convex combination of two identification-robust statistics is large. These tests control size under weak identification and have a number of optimality properties in a conditional problem. We show that the conditional likelihood ratio test of Moreira (2003) is a conditional linear combination test in models with one endogenous regressor, and that the class of conditional linear combination tests is equivalent to a class of quasi-conditional likelihood ratio tests. We suggest using minimax regret conditional linear combination tests and propose a computationally tractable class of tests that plug in an estimator for a nuisance parameter. These plug-in tests perform well in simulation and have optimal power in many strongly identified models, thus allowing powerful identification-robust inference in a wide range of linear and nonlinear models without sacrificing efficiency if identification is strong.

4.
The design of distributed computer systems (DCSs) requires compromise among several conflicting objectives. For instance, high system availability conflicts with low cost, which in turn conflicts with quick response time. This paper presents an approach, based on multicriteria decision-making techniques, to arrive at a good design in this multiobjective environment. An interactive procedure is developed to support the decision making of system designers. Starting from an initial solution, the procedure presents a sequence of nondominated vectors to designers, allowing them to explore systematically alternative possibilities on the path to a final design. The model user has control over trade-offs among different design objectives. This paper focuses on the details of the mathematical model used to provide decision support. Accordingly, a formulation of DCS design as a multicriteria decision problem is developed. The exchange search heuristic used to generate nondominated solutions is also presented. We argue that multicriteria models provide a more realistic formulation of the DCS design problem than the single-criterion models widely used in the literature. While obtaining a clear definition of design objectives (single or multiple) is an important activity, by explicitly acknowledging the trade-offs among multiple objectives in the design process, our methodology is more likely to produce a better overall design than methods addressing a single criterion in isolation.
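The nondominated (Pareto) filtering that such an interactive procedure presents to designers can be sketched directly. A minimal Python version, assuming all objectives are minimized; the design vectors (cost, response time, 1 - availability) are hypothetical.

```python
# Minimal sketch: keep only design vectors not dominated on all objectives.
import numpy as np

def nondominated(points: np.ndarray) -> np.ndarray:
    """Return the rows of `points` not dominated by any other row (minimization)."""
    keep = []
    for i, p in enumerate(points):
        dominated = any(
            np.all(q <= p) and np.any(q < p)
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            keep.append(i)
    return points[keep]

designs = np.array([[10, 5, 0.2], [8, 7, 0.3], [12, 4, 0.1], [11, 6, 0.25]])
print(nondominated(designs))   # the last design is dominated by the first
```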

5.
This paper develops a theory of randomization tests under an approximate symmetry assumption. Randomization tests provide a general means of constructing tests that control size in finite samples whenever the distribution of the observed data exhibits symmetry under the null hypothesis. Here, by "exhibits symmetry" we mean that the distribution remains invariant under a group of transformations. In this paper, we provide conditions under which the same construction can be used to produce tests that asymptotically control the probability of a false rejection whenever the distribution of the observed data exhibits approximate symmetry, in the sense that the limiting distribution of a function of the data exhibits symmetry under the null hypothesis. An important application of this idea is in settings where the data may be grouped into a fixed number of "clusters" with a large number of observations within each cluster. In such settings, we show that the distribution of the observed data satisfies our approximate symmetry requirement under weak assumptions. In particular, our results allow the clusters to be heterogeneous and to have dependence not only within each cluster, but also across clusters. This approach enjoys several advantages over other approaches in these settings.
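In the fixed-number-of-clusters application, the construction amounts to a randomization test over sign changes applied to cluster-level statistics; with few clusters, all sign assignments can be enumerated. A minimal sketch with hypothetical cluster-level estimates; the paper's actual conditions and statistics are more general.

```python
# Minimal sketch: randomization test over the group of sign changes.
import itertools
import numpy as np

def sign_change_pvalue(cluster_stats):
    """Two-sided randomization p-value for H0: mean = 0 under sign symmetry."""
    s = np.asarray(cluster_stats, dtype=float)
    observed = abs(s.mean())
    signs = itertools.product([-1.0, 1.0], repeat=len(s))
    stats = [abs((eps * s).mean()) for eps in map(np.array, signs)]
    return np.mean([t >= observed for t in stats])

cluster_means = [0.8, 1.1, 0.6, 0.9, 1.3, 0.7]   # hypothetical per-cluster estimates
print(sign_change_pvalue(cluster_means))          # small p-value -> reject H0
```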

6.
The difficulties in properly anticipating key economic variables may encourage decision makers to rely on experts' forecasts. Professional forecasters, however, may not be reliable and so their forecasts must be empirically tested. This may induce experts to forecast strategically in order to pass the test. A test can be ignorantly passed if a false expert, with no knowledge of the data-generating process, can pass the test. Many tests that are unlikely to reject correct forecasts can be ignorantly passed. Tests that cannot be ignorantly passed do exist, but these tests must make use of predictions contingent on data not yet observed at the time the forecasts are rejected. Such tests cannot be run if forecasters report only the probability of the next period's events on the basis of the actually observed data. This result shows that it is difficult to dismiss false, but strategic, experts who know how theories are tested. This result also shows an important role that can be played by predictions contingent on data not yet observed.
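A minimal sketch of one simple empirical test of this kind, a binned calibration check, which this literature shows a strategic false expert can pass without any knowledge of the data-generating process. The forecasts, outcomes, bin count, and tolerance are all hypothetical.

```python
# Minimal sketch: a calibration test of a forecaster's reported probabilities.
import numpy as np

def passes_calibration(forecasts, outcomes, n_bins=5, tol=0.1):
    """Pass if, within each forecast bin, the empirical event frequency is
    within `tol` of the average reported forecast in that bin."""
    forecasts, outcomes = np.asarray(forecasts), np.asarray(outcomes)
    bins = np.minimum((forecasts * n_bins).astype(int), n_bins - 1)
    for b in range(n_bins):
        mask = bins == b
        if mask.sum() and abs(outcomes[mask].mean() - forecasts[mask].mean()) > tol:
            return False
    return True

rng = np.random.default_rng(2)
p = rng.uniform(size=1000)          # reported forecasts
y = rng.binomial(1, p)              # outcomes drawn with those odds
print(passes_calibration(p, y))     # True: the test is passed
```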

7.
The paper illustrates the desirability and feasibility of computer simulation techniques in socio-psychological research. A computer simulation model of a five-man industrial work group is constructed. After the model has successfully passed a two-stage validation procedure, an experimentation phase is conducted. In a 2 × 3 replicated factorial experiment, five years of simulated weekly data are used to test several hypotheses relating the independent variables (supervisory style and worker interpersonal orientation) to productivity, worker job satisfaction, and group cohesiveness. The hypotheses were derived from the findings of prior short-term laboratory and field research. The study indicates that the computer simulation approach is a valuable adjunct to classical organizational research techniques.

8.
Many chemicals interfere with natural reproductive processes in mammals. A chemical may prevent the fertilization of an egg or keep a zygote from implanting in the uterine wall. For this reason, toxicology studies with pre-implantation exposure often exhibit a dose-related trend in the number of observed implantations per litter. Standard methods for analyzing developmental toxicology studies condition on the number of implantations in the litter and therefore cannot estimate this effect of the chemical on the reproductive process. This article presents a joint modeling approach to estimating risk in toxicology studies with pre-implantation exposure. In the joint modeling approach, both the number of implanted fetuses and the outcome of each implanted fetus are modeled. Using this approach, we show how to estimate an overall risk of a chemical that incorporates the risk of lost implantation due to pre-implantation exposure. Our approach has several distinct advantages over previous methods: (1) it is based on fitting a model for the observed data, so diagnostics of model fit and selection apply; (2) all assumptions are explicitly stated; and (3) it can be fit using standard software packages. We illustrate our approach by analyzing a dominant lethal assay data set (Luning et al., 1966, Mutation Research, 3, 444-451) and compare our results with those of Rai and Van Ryzin (1985, Biometrics, 41, 1-9) and Dunson (1998, Biometrics, 54, 558-569). In a simulation study, our approach has smaller bias and variance than the multiple imputation procedure of Dunson.
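A minimal sketch of the joint-modeling idea: the number of implants per litter and each implant's outcome are modeled together, so the dose effect on implantation enters the likelihood. This is an illustrative simplification (Poisson implant counts, logistic per-implant risk), not the article's exact model; the data are simulated.

```python
# Minimal sketch: joint likelihood for implant counts and per-implant outcomes.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(3)
dose = np.repeat([0.0, 0.5, 1.0], 40)                # litters per dose group
m = rng.poisson(np.exp(2.3 - 0.4 * dose))            # implants per litter
y = rng.binomial(m, expit(-2.0 + 1.5 * dose))        # adverse outcomes per litter

def negloglik(theta):
    a0, a1, b0, b1 = theta
    lam = np.exp(a0 + a1 * dose)                     # implantation rate
    p = expit(b0 + b1 * dose)                        # per-implant risk
    ll_m = m * np.log(lam) - lam                     # Poisson kernel
    ll_y = y * np.log(p) + (m - y) * np.log1p(-p)    # binomial kernel
    return -(ll_m + ll_y).sum()

fit = minimize(negloglik, x0=[2.0, 0.0, -1.0, 0.0], method="Nelder-Mead")
print(fit.x)  # joint estimates of implantation-rate and per-implant-risk effects
```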

9.
We consider a patient admission problem for a hospital with multiple resource constraints (e.g., operating rooms and beds) and a stochastic evolution of patient care requirements across multiple resources. A small but significant proportion of emergency patients arrive randomly and must be accepted at the hospital. The hospital, however, must decide whether to accept, postpone, or even reject admissions from a random stream of non-emergency elective patients. We formulate the control process as a Markov decision process to maximize expected contribution net of overbooking costs, develop bounds using approximate dynamic programming, and use them to construct heuristics. We test our methods on data from the Ronald Reagan UCLA Medical Center and find that our intuitive newsvendor-based heuristic performs well across all scenarios.
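A minimal sketch of the newsvendor logic behind such a heuristic: reserve capacity for random emergency arrivals at the critical fractile of their distribution, and admit electives up to the remainder. The capacity, costs, and Poisson demand assumption are all hypothetical.

```python
# Minimal sketch: newsvendor-style reservation of beds for emergency arrivals.
from scipy.stats import poisson

capacity = 30                    # beds available tomorrow
c_under = 500.0                  # cost of turning away an emergency patient
c_over = 150.0                   # cost of an unused-but-reserved bed
emergency_mean = 6.0             # Poisson mean of emergency arrivals

# Critical fractile: reserve the smallest r with F(r) >= cu / (cu + co).
fractile = c_under / (c_under + c_over)
reserve = int(poisson.ppf(fractile, emergency_mean))
elective_quota = capacity - reserve
print(reserve, elective_quota)   # beds reserved, electives to accept
```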

10.
Several assumptions, defined and undefined, are used in the toxicity assessment of chemical mixtures. In scientific practice, mixture components in the low-dose region, particularly at subthreshold doses, are often assumed to behave additively (i.e., with zero interaction) based on heuristic arguments. This assumption has important implications for the practice of risk assessment, but it has not been experimentally tested. We have developed methodology to test for additivity in the sense of Berenbaum (Advances in Cancer Research, 1981), based on the statistical equivalence-testing literature, in which the null hypothesis of interaction is rejected in favor of the alternative hypothesis of additivity when the data support that claim. The implication of this approach is that conclusions of additivity are made with a false-positive rate controlled by the experimenter. The claim of additivity is based on prespecified additivity margins, chosen using expert biological judgment so that small deviations from additivity that are not considered biologically important are not statistically significant. This approach contrasts with the usual hypothesis-testing framework, which assumes additivity in the null hypothesis and rejects when there is significant evidence of interaction; in that scenario, failure to reject may be due to lack of statistical power, making the claim of additivity problematic. The proposed method is illustrated in a mixture of five organophosphorus pesticides that were experimentally evaluated alone and at relevant mixing ratios. Motor activity was assessed in adult male rats following acute exposure. Four low-dose mixture groups were evaluated, and evidence of additivity was found in three of the four. The proposed method tests for additivity of the whole mixture and does not account for subset interactions (e.g., synergistic or antagonistic) that may have occurred and cancelled each other out.
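A minimal sketch of the equivalence-testing logic via two one-sided tests (TOST): additivity is concluded only if the mixture response is demonstrably within prespecified margins of the additivity prediction. The margin, the additivity prediction, and the data below are hypothetical, and this generic TOST is a stand-in for the authors' exact procedure.

```python
# Minimal sketch: TOST for equivalence of a mixture response to additivity.
import numpy as np
from scipy import stats

mixture = np.array([98, 105, 101, 96, 103, 99], dtype=float)  # motor activity
additive_pred = 100.0          # response predicted under Berenbaum additivity
margin = 10.0                  # biologically negligible deviation

diff = mixture - additive_pred
t_lo = stats.ttest_1samp(diff, -margin, alternative="greater")
t_hi = stats.ttest_1samp(diff, margin, alternative="less")
p_tost = max(t_lo.pvalue, t_hi.pvalue)
print(p_tost)                  # small p -> conclude additivity within margins
```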

11.
A Study of the Relationship between CFO Compensation and Earnings Quality in Chinese Listed Companies
This paper examines the relationship between CFO compensation and earnings quality in Chinese listed companies. We find that, as the governance mechanisms of Chinese listed companies have improved, listed companies have gradually established CFO compensation incentive schemes that use earnings as the performance measure. Through a step-by-step analysis, we find that CFO compensation contracts in Chinese listed companies significantly distinguish between the non-recurring and recurring components of earnings, but fail to distinguish effectively between the accrual and operating cash flow components of recurring earnings, a phenomenon resembling "functional fixation." After further partitioning the sample, we find that, because the CFO compensation contracts of earnings-managing listed companies assign unreasonable weights to non-recurring and recurring earnings, the compensation contracts of loss-reversing listed companies actually encourage CFOs to manage earnings. Based on these findings, we argue that resolving the "functional fixation" of CFO compensation contracts on accruals versus operating cash flows, and improving the compensation contracts of earnings-managing listed companies, are two important tasks for improving the CFO compensation incentive mechanisms of Chinese listed companies.
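A minimal sketch of the kind of compensation regression the study implies: CFO pay regressed on earnings split into non-recurring income, accruals, and operating cash flow, with "functional fixation" showing up as equal weights on accruals and cash flow. The variable names and data are hypothetical, not the study's.

```python
# Minimal sketch: test whether pay weights accruals and cash flow equally.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 300
cfo_flow = rng.normal(1.0, 0.3, n)        # operating cash flow component
accruals = rng.normal(0.2, 0.2, n)        # accrual component
nonrecurring = rng.normal(0.1, 0.2, n)    # non-recurring gains/losses
pay = 10 + 3 * cfo_flow + 3 * accruals + 1 * nonrecurring + rng.normal(0, 1, n)

df = pd.DataFrame(dict(pay=pay, cfo_flow=cfo_flow,
                       accruals=accruals, nonrecurring=nonrecurring))
fit = smf.ols("pay ~ cfo_flow + accruals + nonrecurring", data=df).fit()
print(fit.f_test("cfo_flow = accruals"))  # fail to reject -> fixation pattern
```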

12.
A method is presented for measuring the importance of a mean-difference hypothesis, and of a set of such hypotheses. The method is based on the population prediction uncertainty associated with the models for the alternative and null hypotheses. Previous methods for ANOVA designs are special cases of the method presented here. The method is demonstrated in ways that show its flexibility for the analysis of mean-difference hypotheses. The method encourages tests of meaningful research hypotheses rather than fitting the hypotheses to the method.

13.
This paper presents a real application of a multicriteria decision aid (MCDA) approach to portfolio selection based on preference disaggregation, using ordinal regression and linear programming (the UTADIS method: UTilités Additives DIScriminantes). The additive utility functions derived through this approach can extrapolate, so any new alternative (share) can easily be evaluated and classified into one of several user-predefined groups. The procedure is illustrated with a case study of 98 stocks from the Athens stock exchange, evaluated on 15 criteria. The results are encouraging, indicating that the proposed methodology could be used as a tool for analyzing portfolio managers' preferences and choices. Furthermore, a comparison with multiple discriminant analysis (with and without a stepwise procedure) illustrates the superiority of the proposed methodology over a well-known multivariate statistical technique that has been used extensively to study financial decision-making problems.
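A minimal sketch of the preference-disaggregation idea, simplified to a linear additive utility: a linear program chooses criterion weights and a cutoff so that one predefined group of stocks scores above the cutoff and the other below it, minimizing classification slack. The real UTADIS method uses piecewise-linear marginal utilities; the groups, criteria, and data here are hypothetical.

```python
# Minimal sketch: LP-based two-group classification with a linear additive utility.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(5)
k, n1, n2 = 3, 10, 10
good = rng.normal(0.7, 0.1, (n1, k))      # criteria scores of "good" stocks
poor = rng.normal(0.4, 0.1, (n2, k))      # criteria scores of "poor" stocks
n, delta = n1 + n2, 0.01

# Variables: w (k weights), t (cutoff), e (n nonnegative slacks).
c = np.r_[np.zeros(k + 1), np.ones(n)]    # minimize total slack
A_ub = np.zeros((n, k + 1 + n))
b_ub = np.full(n, -delta)
A_ub[:n1, :k], A_ub[:n1, k] = -good, 1.0          # t + delta - e <= w.x
A_ub[n1:, :k], A_ub[n1:, k] = poor, -1.0          # w.x + delta - e <= t
A_ub[np.arange(n), k + 1 + np.arange(n)] = -1.0
A_eq = np.r_[np.ones(k), 0.0, np.zeros(n)].reshape(1, -1)   # weights sum to 1
bounds = [(0, 1)] * k + [(None, None)] + [(0, None)] * n
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=bounds)
weights, cutoff = res.x[:k], res.x[k]
print(weights, cutoff)
```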

14.
The determination of reasonable compensation is one of the most frequently contested issues between the taxpayer and the IRS. The major purpose of this study is to develop a multiple regression model to predict accurately the amount of compensation allowed by the Tax Court as a percent of the amount in dispute between the taxpayer and the IRS. In general, the taxpayer receives favorable treatment in Tax Court when contesting unreasonable compensation payments. The multiple regression model, developed using a stepwise procedure, is a good predictor of the compensation allowed by the court. The overall results have important implications for developing taxpayers' appeal strategies.
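A minimal sketch of forward stepwise selection of the kind used to build such a predictive model, here adding predictors while AIC improves. The candidate predictors and data are hypothetical, not the study's variables.

```python
# Minimal sketch: forward stepwise OLS selection by AIC.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 120
df = pd.DataFrame({
    "x1": rng.normal(size=n),   # e.g., a firm-size proxy (hypothetical)
    "x2": rng.normal(size=n),   # e.g., years of service (hypothetical)
    "x3": rng.normal(size=n),   # an irrelevant candidate
})
df["pct_allowed"] = 60 + 8 * df.x1 + 5 * df.x2 + rng.normal(0, 4, n)

selected, remaining = [], ["x1", "x2", "x3"]
best_aic = smf.ols("pct_allowed ~ 1", data=df).fit().aic
while remaining:
    scores = {v: smf.ols("pct_allowed ~ " + " + ".join(selected + [v]),
                         data=df).fit().aic for v in remaining}
    v, aic = min(scores.items(), key=lambda kv: kv[1])
    if aic >= best_aic:
        break                                  # no candidate improves AIC
    selected.append(v); remaining.remove(v); best_aic = aic
print(selected)                                # typically ['x1', 'x2']
```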

15.
Tests and assessments are used in organizations for a wide range of purposes, and it is the uses of tests, not the tests themselves, that are validated. As a result, the critical question is often not "Is this test valid?" but rather "Valid for what?". Tests normally have multiple uses and purposes in organizations, which may be defined and understood differently by different stakeholders, and tests may have as many validities as they have uses. The strengths and weaknesses of existing validation strategies are examined and compared in light of the ways tests are used in organizations. Content validation often seems unconnected with the ways tests are used and interpreted in organizations, and is not always a useful strategy for validating tests. Criterion-oriented validation methods (including sophisticated variants, such as the validity generalization model) are often deficient because they apply a univariate strategy to evaluate what is clearly a multivariate phenomenon: the use of test scores to make high-stakes decisions in organizations. Multivariate models of validation provide an opportunity to integrate qualitatively different criteria (e.g., efficiency and equity) in evaluating the validity of a test as it is used in an organization.

16.
Checking parameter stability of econometric models is a long-standing problem. Almost all existing structural change tests in econometrics are designed to detect abrupt breaks. Little attention has been paid to smooth structural changes, which may be more realistic in economics. We propose a consistent test for smooth structural changes as well as abrupt structural breaks with known or unknown change points. The idea is to estimate smooth time-varying parameters by local smoothing and compare the fitted values of the restricted constant parameter model and the unrestricted time-varying parameter model. The test is asymptotically pivotal and does not require prior information about the alternative. A simulation study highlights the merits of the proposed test relative to a variety of popular tests for structural changes. In an application, we strongly reject the stability of univariate and multivariate stock return prediction models in the postwar and post-oil-shocks periods.
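A minimal sketch of the test's core comparison on toy data: estimate a time-varying coefficient by kernel (local) smoothing and compare its fitted values with those of the restricted constant-coefficient model. The actual statistic and its asymptotic critical values are developed in the paper; everything below (bandwidth, kernel, data) is illustrative.

```python
# Minimal sketch: local smoothing of a time-varying coefficient vs. a constant fit.
import numpy as np

rng = np.random.default_rng(7)
T = 300
t = np.arange(T) / T
x = rng.normal(size=T)
beta_t = 1.0 + np.sin(np.pi * t)                 # smoothly changing parameter
y = beta_t * x + rng.normal(0, 0.5, T)

h = 0.1                                          # bandwidth (illustrative)
def local_beta(s):
    w = np.exp(-0.5 * ((t - s) / h) ** 2)        # Gaussian kernel weights
    return np.sum(w * x * y) / np.sum(w * x * x) # local least squares

beta_hat = np.array([local_beta(s) for s in t])  # unrestricted fit
beta_const = np.sum(x * y) / np.sum(x * x)       # restricted (constant) fit
stat = np.mean((beta_hat * x - beta_const * x) ** 2)
print(stat)   # large values signal instability; critical values come from
              # the test's asymptotic theory, omitted here
```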

17.
The aim of this paper is to develop revealed preference tests for Cournot equilibrium. The tests are akin to the widely used revealed preference tests for consumption, but have to take into account the presence of strategic interaction in a game-theoretic setting. The tests take the form of linear programs, the solutions to which also allow us to recover cost information on the firms. To check that these nonparametric tests are sufficiently discriminating to reject real data, we apply them to the market for crude oil.
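A minimal sketch of the revealed-preference feasibility check: search for positive demand slopes and nonnegative, output-monotone marginal costs satisfying the Cournot first-order conditions at every observation, so that infeasibility of the linear program rejects Cournot rationalizability. This is an illustrative simplification with made-up data, not the paper's exact formulation.

```python
# Minimal sketch: LP feasibility test for Cournot rationalizability.
import numpy as np
from scipy.optimize import linprog

P = np.array([10.0, 9.0, 8.0])                 # observed prices, T = 3
q = np.array([[1.0, 1.5, 2.0],                 # firm 1 outputs across T
              [1.2, 1.6, 2.1]])                # firm 2 outputs across T
nfirms, T = q.shape

# Variables: d_1..d_T (demand slopes), then c_it (marginal costs), firm-major.
nvar = T + nfirms * T
A_eq, b_eq = [], []
for i in range(nfirms):
    for t in range(T):
        row = np.zeros(nvar)
        row[t] = q[i, t]                       # d_t * q_it
        row[T + i * T + t] = 1.0               # + c_it
        A_eq.append(row); b_eq.append(P[t])    # FOC: d_t*q_it + c_it = P_t
A_ub, b_ub = [], []
for i in range(nfirms):                        # convex costs: monotone marginal cost
    for s in range(T):
        for t in range(T):
            if q[i, s] > q[i, t]:
                row = np.zeros(nvar)
                row[T + i * T + t] = 1.0       # c_it <= c_is when q_it < q_is
                row[T + i * T + s] = -1.0
                A_ub.append(row); b_ub.append(0.0)
bounds = [(1e-6, None)] * T + [(0, None)] * (nfirms * T)
res = linprog(np.zeros(nvar), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=bounds)
print("Cournot-rationalizable:", res.status == 0)
```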

18.
Methods to Approximate Joint Uncertainty and Variability in Risk
As interest grows in quantitative analysis of joint uncertainty and interindividual variability (JUV) in risk, so does the need for related computational shortcuts. To quantify JUV in risk, Monte Carlo methods typically require nested sampling of JUV in distributed inputs, which is cumbersome and time-consuming. Two approximation methods proposed here allow simpler and more rapid analysis. The first consists of new upper-bound JUV estimators that involve only uncertainty or variability, not both, and so never require nested sampling to calculate. The second is a discrete-probability-calculus procedure that uses only the mean and one upper-tail mean for each input to estimate mean and upper-bound risk; this procedure is simpler and more intuitive than similar ones in use. Application of these methods is illustrated in an assessment of cancer risk from residential exposures to chloroform in the Kanawha Valley, West Virginia. Because each of the multiple exposure pathways considered in this assessment had separately modeled sources of uncertainty and variability, the assessment illustrates a realistic case in which a standard Monte Carlo approach to JUV analysis requires nested sampling. In the illustration, the first proposed method quantified JUV in cancer risk much more efficiently than corresponding nested Monte Carlo calculations. The second proposed method also nearly duplicated JUV-related and other estimates of risk obtained using Monte Carlo methods. Both methods were thus found adequate to obtain basic risk estimates accounting for JUV in a realistically complex risk assessment. These methods make routine JUV analysis more convenient and practical.
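A minimal sketch of the nested Monte Carlo structure the proposed shortcuts avoid: an outer loop samples uncertain quantities, an inner loop samples variable (interindividual) quantities, yielding a distribution of population risk across uncertainty. The dose-response form and input distributions below are hypothetical.

```python
# Minimal sketch: nested (two-dimensional) Monte Carlo over uncertainty and variability.
import numpy as np

rng = np.random.default_rng(8)
n_outer, n_inner = 1000, 1000

pop_risks = np.empty(n_outer)
for j in range(n_outer):
    slope = rng.lognormal(np.log(1e-3), 0.5)          # uncertain potency
    doses = rng.lognormal(np.log(0.1), 1.0, n_inner)  # variable exposure across people
    pop_risks[j] = np.mean(slope * doses)             # population-mean risk

print(np.mean(pop_risks),                    # expected population risk
      np.percentile(pop_risks, 95))          # upper bound across uncertainty
```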

19.
Toxicologists are often interested in assessing the joint effect of an exposure on multiple reproductive endpoints, including early loss, fetal death, and malformation. Exposures that occur prior to mating or extremely early in development can adversely affect the number of implantation sites or fetuses that form within each dam and may even prevent pregnancy. A simple approach for assessing overall adverse effects in such studies is to consider fetuses or implants that fail to develop due to exposure as missing data. The missing data can be imputed, and standard methods for the analysis of quantal response data can then be used for quantitative risk assessment or testing. In this article, a new bias-corrected imputation procedure is proposed and evaluated. The procedure is straightforward to implement in standard statistical packages and has excellent operating characteristics when used in combination with a marginal model fit with generalized estimating equations. The methods are applied to data from a reproductive toxicity study of Nitrofurazone conducted by the National Toxicology Program.
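A minimal sketch of the pipeline described: impute implants lost to exposure as affected fetuses, then fit a marginal dose-response model by generalized estimating equations with litters as clusters. The crude imputation rule below is only a stand-in for the article's bias-corrected procedure; the data are simulated.

```python
# Minimal sketch: naive imputation of lost implants plus a GEE marginal model.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
rows = []
for litter in range(60):
    dose = rng.choice([0.0, 0.5, 1.0])
    n_implants = rng.poisson(np.exp(2.3 - 0.3 * dose))   # dose reduces implants
    n_expected = int(np.exp(2.3))                        # control-level implants
    for _ in range(n_implants):                          # observed fetuses
        p = 1 / (1 + np.exp(2.0 - 1.2 * dose))
        rows.append((litter, dose, rng.binomial(1, p)))
    for _ in range(max(n_expected - n_implants, 0)):     # imputed lost implants,
        rows.append((litter, dose, 1))                   # treated as affected
df = pd.DataFrame(rows, columns=["litter", "dose", "adverse"])

fit = smf.gee("adverse ~ dose", groups="litter", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(fit.summary())
```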

20.
A Study of the Weak-Form Efficiency of China's Futures Markets
张小艳  张宗成 《管理工程学报》2007,21(1):145-147,154
Since financial prices following a random walk implies that a market is weak-form efficient, while the presence of a unit root is only a necessary condition for a random walk, this paper exploits that implication by combining unit root tests with autocorrelation tests, together with variance ratio tests and multiple variance ratio tests, to examine the random walk hypothesis empirically. The aim is to determine whether China's three major futures markets (copper, soybean, and wheat) are weak-form efficient. The results show that the various tests reach consistent conclusions: the log futures price series of the copper, soybean, and wheat futures markets are consistent with the random walk hypothesis.
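A minimal sketch of the battery of tests described: an augmented Dickey-Fuller unit root test on log prices, a Ljung-Box autocorrelation test on returns, and a simple Lo-MacKinlay-style variance ratio. A real study would use actual log futures price series; here a random walk is simulated for illustration.

```python
# Minimal sketch: unit root, autocorrelation, and variance ratio checks.
import numpy as np
from statsmodels.tsa.stattools import adfuller
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(10)
log_price = np.cumsum(rng.normal(0, 0.01, 1000))   # simulated log price series
returns = np.diff(log_price)

print(adfuller(log_price)[1])              # high p -> unit root not rejected
print(acorr_ljungbox(returns, lags=[10]))  # high p -> no return autocorrelation

def variance_ratio(x, q):
    """VR(q): variance of q-period returns over q times the 1-period variance."""
    r1 = np.diff(x)
    rq = x[q:] - x[:-q]
    return rq.var(ddof=1) / (q * r1.var(ddof=1))

print(variance_ratio(log_price, 5))        # near 1 under a random walk
```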
