Similar Literature
20 similar documents found.
1.
This paper presents a solution to an important econometric problem, namely the root n consistent estimation of nonlinear models with measurement errors in the explanatory variables, when one repeated observation of each mismeasured regressor is available. While a root n consistent estimator has been derived for polynomial specifications (see Hausman, Ichimura, Newey, and Powell (1991)), such an estimator for general nonlinear specifications has so far not been available. Using the additional information provided by the repeated observation, the suggested estimator separates the measurement error from the “true” value of the regressors thanks to a useful property of the Fourier transform: The Fourier transform converts the integral equations that relate the distribution of the unobserved “true” variables to the observed variables measured with error into algebraic equations. The solution to these equations yields enough information to identify arbitrary moments of the “true,” unobserved variables. The value of these moments can then be used to construct any estimator that can be written in terms of moments, including traditional linear and nonlinear least squares estimators, or general extremum estimators. The proposed estimator is shown to admit a representation in terms of an influence function, thus establishing its root n consistency and asymptotic normality. Monte Carlo evidence and an application to Engel curve estimation illustrate the usefulness of this new approach.
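
A minimal numerical sketch of the simplest instance of this identification idea, assuming classical measurement error (two noisy observations of the same regressor with independent, mean-zero errors): the cross-moment of the two observations identifies the second moment of the unobserved regressor. The paper's Fourier-transform argument generalizes this to arbitrary moments; this is only the most basic case.

```python
import numpy as np

# Sketch under assumed classical measurement error:
#   x1 = x* + u1,  x2 = x* + u2,  with u1, u2 independent of x* and of each
#   other, and E[u1] = E[u2] = 0.  Then E[x1 * x2] = E[x*^2], so the second
#   moment of the unobserved "true" regressor is identified from the two
#   error-ridden observations.
rng = np.random.default_rng(0)
n = 200_000
x_star = rng.gamma(shape=2.0, scale=1.5, size=n)   # true regressor
x1 = x_star + rng.normal(0.0, 1.0, size=n)         # first noisy measurement
x2 = x_star + rng.normal(0.0, 1.0, size=n)         # repeated noisy measurement

print("naive   E[x1^2]  :", np.mean(x1 ** 2))      # biased upward by Var(u1)
print("cross   E[x1*x2] :", np.mean(x1 * x2))      # consistent for E[x*^2]
print("target  E[x*^2]  :", np.mean(x_star ** 2))
```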

2.
We consider nonparametric identification and estimation in a nonseparable model where a continuous regressor of interest is a known, deterministic, but kinked function of an observed assignment variable. We characterize a broad class of models in which a sharp “Regression Kink Design” (RKD or RK Design) identifies a readily interpretable treatment‐on‐the‐treated parameter (Florens, Heckman, Meghir, and Vytlacil (2008)). We also introduce a “fuzzy regression kink design” generalization that allows for omitted variables in the assignment rule, noncompliance, and certain types of measurement errors in the observed values of the assignment variable and the policy variable. Our identifying assumptions give rise to testable restrictions on the distributions of the assignment variable and predetermined covariates around the kink point, similar to the restrictions delivered by Lee (2008) for the regression discontinuity design. Using a kink in the unemployment benefit formula, we apply a fuzzy RKD to empirically estimate the effect of benefit rates on unemployment durations in Austria.
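
A small simulation sketch of the sharp RKD intuition on a hypothetical data-generating process (not the paper's Austrian application): the estimand is the change in slope of the conditional mean of the outcome at the kink, divided by the known change in slope of the policy rule, estimable with local linear fits on either side of the kink.

```python
import numpy as np

# Sharp RKD sketch: the benefit schedule b(v) has a known kink at v = 0, and
# the RKD estimand is the kink in E[y | v] divided by the known kink in db/dv.
rng = np.random.default_rng(1)
n = 100_000
v = rng.uniform(-1.0, 1.0, size=n)                 # assignment variable
b = np.where(v < 0, 0.8 * v, 0.2 * v)              # kinked policy rule: slope 0.8 -> 0.2
true_effect = 2.0                                  # effect of b on the outcome
y = true_effect * b + 0.5 * v + rng.normal(0.0, 0.3, size=n)

def local_slope(side, h=0.2):                      # local linear fit within bandwidth h
    mask = (v > -h) & (v < h) & side
    return np.polyfit(v[mask], y[mask], 1)[0]

kink_in_y = local_slope(v >= 0) - local_slope(v < 0)
kink_in_b = 0.2 - 0.8                              # known from the policy formula
print("RKD estimate:", kink_in_y / kink_in_b, " true effect:", true_effect)
```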

3.
We propose a new and flexible nonparametric framework for estimating the jump tails of Itô semimartingale processes. The approach is based on a relatively simple‐to‐implement set of estimating equations associated with the compensator for the jump measure, or its intensity, that only utilizes the weak assumption of regular variation in the jump tails, along with in‐fill asymptotic arguments for directly estimating the “large” jumps. The procedure assumes that the large‐sized jumps are identically distributed, but otherwise allows for very general dynamic dependencies in jump occurrences, and, importantly, does not restrict the behavior of the “small” jumps or the continuous part of the process and the temporal variation in the stochastic volatility. On implementing the new estimation procedure with actual high‐frequency data for the S&P 500 aggregate market portfolio, we find strong evidence for richer and more complex dynamic dependencies in the jump tails than hitherto entertained in the literature.
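
A simplified sketch of the kind of tail estimation involved, assuming regularly varying jump tails as above but using a plain Hill estimator on simulated "large" jump sizes rather than the paper's estimating equations tied to the jump compensator.

```python
import numpy as np

# Simplified sketch: estimate the tail index of "large" jumps with a Hill
# estimator.  The paper's procedure is different; this only illustrates
# tail-index estimation from large jump sizes under regular variation.
rng = np.random.default_rng(2)
alpha_true = 3.0                                    # tail index of jump sizes
jumps = rng.pareto(alpha_true, size=5_000) + 1.0    # simulated large jump sizes

def hill_estimator(x, k):
    """Hill estimate of the tail index using the k largest observations."""
    x_sorted = np.sort(x)[::-1]
    logs = np.log(x_sorted[:k]) - np.log(x_sorted[k])
    return 1.0 / np.mean(logs)

print("Hill tail-index estimate:", hill_estimator(jumps, k=500),
      " true:", alpha_true)
```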

4.
Suppose that each player in a game is rational, each player thinks the other players are rational, and so on. Also, suppose that rationality is taken to incorporate an admissibility requirement—that is, the avoidance of weakly dominated strategies. Which strategies can be played? We provide an epistemic framework in which to address this question. Specifically, we formulate conditions of rationality and mth‐order assumption of rationality (RmAR) and rationality and common assumption of rationality (RCAR). We show that (i) RCAR is characterized by a solution concept we call a “self‐admissible set”; (ii) in a “complete” type structure, RmAR is characterized by the set of strategies that survive m+1 rounds of elimination of inadmissible strategies; (iii) under certain conditions, RCAR is impossible in a complete structure.
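
A small sketch of the elimination procedure that result (ii) refers to, on a hypothetical two-player game, checking weak dominance by pure strategies only and eliminating simultaneously for both players (the paper's notion of admissibility is defined against all mixtures, so this is a simplification).

```python
import numpy as np

# Sketch: iterated elimination of weakly dominated strategies (admissibility)
# in a hypothetical 2-player game, with pure-strategy dominance checks only.
A = np.array([[3, 3], [2, 2], [3, 1]])              # row player's payoffs
B = np.array([[2, 0], [2, 2], [2, 1]])              # column player's payoffs

def weakly_dominated(payoff, own, opp):
    """Own strategies weakly dominated (on the surviving opponent set)."""
    out = set()
    for i in own:
        for j in own:
            if i == j:
                continue
            diff = payoff[j, opp] - payoff[i, opp]
            if np.all(diff >= 0) and np.any(diff > 0):
                out.add(i)
                break
    return out

rows, cols = list(range(A.shape[0])), list(range(A.shape[1]))
changed = True
while changed:
    dr = weakly_dominated(A, rows, cols)
    dc = weakly_dominated(B.T, cols, rows)
    changed = bool(dr or dc)
    rows = [r for r in rows if r not in dr]
    cols = [c for c in cols if c not in dc]
print("surviving row strategies:", rows, " column strategies:", cols)
```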

5.
The protection and safe operations of power systems heavily rely on the identification of the causes of damage and service disruption. This article presents a general framework for the assessment of power system vulnerability to malicious attacks. The concept of susceptibility to an attack is employed to quantitatively evaluate the degree of exposure of the system and its components to intentional offensive actions. A scenario with two agents having opposing objectives is proposed, i.e., a defender having multiple alternatives of protection strategies for system elements, and an attacker having multiple alternatives of attack strategies against different combinations of system elements. The defender aims to minimize the system susceptibility to the attack, subject to budget constraints; on the other hand, the attacker aims to maximize the susceptibility. The problem is defined as a zero‐sum game between the defender and the attacker. Because the interests of the attacker and the defender are directly opposed, it is irrelevant whether or not the defender reveals the strategy he/she will use; thus, the “leader–follower game” and “simultaneous game” formulations yield the same results. An example of such a situation is presented, and the von Neumann minimax theorem is applied to find the (mixed) equilibrium strategies of the attacker and of the defender.
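
A minimal sketch, with a hypothetical susceptibility matrix, of how the mixed-strategy equilibrium of such a zero-sum defender–attacker game can be computed by linear programming, which is the standard route to the von Neumann minimax value.

```python
import numpy as np
from scipy.optimize import linprog

# Zero-sum game sketch: S[i, j] is the (hypothetical) system susceptibility
# when the defender plays protection strategy i and the attacker plays attack
# strategy j; the defender minimizes, the attacker maximizes.
S = np.array([[0.2, 0.8, 0.5],
              [0.7, 0.3, 0.6],
              [0.4, 0.5, 0.1]])
m, n = S.shape

# Defender LP: choose probabilities p over rows and a scalar v to minimize v
# subject to (S^T p)_j <= v for every attacker column (v bounds the worst case).
c = np.r_[np.zeros(m), 1.0]                      # minimize v
A_ub = np.c_[S.T, -np.ones(n)]                   # S^T p - v <= 0
b_ub = np.zeros(n)
A_eq = np.r_[np.ones(m), 0.0].reshape(1, -1)     # probabilities sum to 1
b_eq = np.array([1.0])
bounds = [(0, None)] * m + [(None, None)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)

print("game value (equilibrium susceptibility):", res.x[-1])
print("defender's mixed strategy:", res.x[:m])
```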

6.
A challenge for large‐scale environmental health investigations such as the National Children's Study (NCS) is characterizing exposures to multiple, co‐occurring chemical agents with varying spatiotemporal concentrations and consequences modulated by biochemical, physiological, behavioral, socioeconomic, and environmental factors. Such investigations can benefit from systematic retrieval, analysis, and integration of diverse extant information on both contaminant patterns and exposure‐relevant factors. This requires development, evaluation, and deployment of informatics methods that support flexible access and analysis of multiattribute data across multiple spatiotemporal scales. A new “Tiered Exposure Ranking” (TiER) framework, developed to support various aspects of risk‐relevant exposure characterization, is described here, with examples demonstrating its application to the NCS. TiER utilizes advances in informatics computational methods, extant database content and availability, and integrative environmental/exposure/biological modeling to support both “discovery‐driven” and “hypothesis‐driven” analyses. “Tier 1” applications focus on “exposomic” pattern recognition for extracting information from multidimensional data sets, whereas second and higher tier applications utilize mechanistic models to develop risk‐relevant exposure metrics for populations and individuals. In this article, “tier 1” applications of TiER explore identification of potentially causative associations among risk factors, for prioritizing further studies, by considering publicly available demographic/socioeconomic, behavioral, and environmental data in relation to two health endpoints (preterm birth and low birth weight). A “tier 2” application develops estimates of pollutant mixture inhalation exposure indices for NCS counties, formulated to support risk characterization for these endpoints. Applications of TiER demonstrate the feasibility of developing risk‐relevant exposure characterizations for pollutants using extant environmental and demographic/socioeconomic data.

7.
I recently discussed pitfalls in attempted causal inference based on reduced‐form regression models. I used as motivation a real‐world example from a paper by Dr. Sneeringer, which interpreted a reduced‐form regression analysis as implying the startling causal conclusion that “doubling of [livestock] production leads to a 7.4% increase in infant mortality.” This conclusion is based on: (A) fitting a reduced‐form regression model to aggregate (e.g., county‐level) data; and (B) (mis)interpreting a regression coefficient in this model as a causal coefficient, without performing any formal statistical tests for potential causation (such as conditional independence, Granger‐Sims, or path analysis tests). Dr. Sneeringer now adds comments that confirm and augment these deficiencies, while advocating methodological errors that, I believe, risk analysts should avoid if they want to reach logically sound, empirically valid, conclusions about cause and effect. She explains that, in addition to (A) and (B) above, she also performed other steps such as (C) manually selecting specific models and variables and (D) assuming (again, without testing) that hand‐picked surrogate variables are valid (e.g., that log‐transformed income is an adequate surrogate for poverty). In her view, these added steps imply that “critiques of A and B are not applicable” to her analysis and that therefore “a causal argument can be made” for “such a strong, robust correlation” as she believes her regression coefficient indicates. However, multiple wrongs do not create a right. Steps (C) and (D) exacerbate the problem of unjustified causal interpretation of regression coefficients, without rendering irrelevant the fact that (A) and (B) do not provide evidence of causality. This reply focuses on whether any statistical techniques can produce the silk purse of a valid causal inference from the sow's ear of a reduced‐form regression analysis of ecological data. We conclude that Dr. Sneeringer's analysis provides no valid indication that air pollution from livestock operations causes any increase in infant mortality rates. More generally, reduced‐form regression modeling of aggregate population data—no matter how it is augmented by fitting multiple models and hand‐selecting variables and transformations—is not adequate for valid causal inference about health effects caused by specific, but unmeasured, exposures.

8.
The climatic conditions of north temperate countries pose unique influences on the rates of invasion and the potential adverse impacts of non‐native species. Methods are needed to evaluate these risks, beginning with the pre‐screening of non‐native species for potential invasives. Recent improvements to the Fish Invasiveness Scoring Kit (FISK) have provided a means (i.e., FISK v2) of identifying potentially invasive non‐native freshwater fishes in virtually all climate zones. In this study, FISK is applied for the first time in a north temperate country, southern Finland, and calibrated to determine the appropriate threshold score for fish species that are likely to pose a high risk of being invasive in this risk assessment area. The threshold between “medium” and “high” risk was determined to be 22.5, which is slightly higher than the original threshold for the United Kingdom (i.e., 19) and that determined for a FISK application in southern Japan (19.8). This underlines the need to calibrate such decision‐support tools for the different areas where they are employed. The results are evaluated in the context of current management strategies in Finland regarding non‐native fishes.
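
Threshold calibration of this kind is commonly done by ROC analysis of the scores against known invasive/non-invasive status; the sketch below uses hypothetical FISK scores and labels and picks the cut-off maximizing Youden's J. The study's exact calibration procedure may differ.

```python
import numpy as np

# Sketch of threshold calibration by ROC analysis (a common approach for
# decision-support scores).  `scores` are hypothetical FISK scores and
# `invasive` the known outcome (1 = invasive, 0 = non-invasive) for a set of
# already-assessed species.
rng = np.random.default_rng(3)
invasive = rng.integers(0, 2, size=60)
scores = np.where(invasive == 1,
                  rng.normal(28, 6, size=60),      # invasive species score higher
                  rng.normal(15, 6, size=60))

best_threshold, best_j = None, -1.0
for t in np.unique(scores):
    tpr = np.mean(scores[invasive == 1] >= t)      # sensitivity at cut-off t
    fpr = np.mean(scores[invasive == 0] >= t)      # 1 - specificity at cut-off t
    if tpr - fpr > best_j:                         # Youden's J = TPR - FPR
        best_j, best_threshold = tpr - fpr, t
print("calibrated 'medium'/'high' risk threshold:", round(best_threshold, 1))
```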

9.
10.
This paper constructs an efficient, budget‐balanced, Bayesian incentive‐compatible mechanism for a general dynamic environment with quasilinear payoffs in which agents observe private information and decisions are made over countably many periods. First, under the assumption of “private values” (other agents' private information does not directly affect an agent's payoffs), we construct an efficient, ex post incentive‐compatible mechanism, which is not budget‐balanced. Second, under the assumption of “independent types” (the distribution of each agent's private information is not directly affected by other agents' private information), we show how the budget can be balanced without compromising agents' incentives. Finally, we show that the mechanism can be made self‐enforcing when agents are sufficiently patient and the induced stochastic process over types is an ergodic finite Markov chain.
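
A static sketch (two agents, discrete independent types, AGV/expected-externality-style transfers, not the paper's dynamic construction) of the budget-balancing idea: each agent is paid the expected externality of its report, financed by the other agent, so transfers sum to zero in every state.

```python
import numpy as np
from itertools import product

# Static AGV-style sketch: with independent types, expected-externality
# transfers preserve incentives for the efficient decision while summing to
# zero state by state (exact budget balance).
types = [0.0, 1.0, 2.0]                      # possible valuations of each agent
prob = np.array([0.3, 0.4, 0.3])             # independent type distribution

def decision(t1, t2):                        # efficient decision: allocate to the
    return 1 if t1 >= t2 else 2              # agent with the higher valuation

def value(agent, t1, t2):                    # quasilinear payoff from the decision
    d = decision(t1, t2)
    return (t1 if agent == 1 else t2) if d == agent else 0.0

def xi(agent, own_type):                     # expected externality of own report
    if agent == 1:
        return sum(p * value(2, own_type, t) for p, t in zip(prob, types))
    return sum(p * value(1, t, own_type) for p, t in zip(prob, types))

for t1, t2 in product(types, types):
    tr1 = xi(1, t1) - xi(2, t2)              # transfer to agent 1
    tr2 = xi(2, t2) - xi(1, t1)              # transfer to agent 2
    assert abs(tr1 + tr2) < 1e-12            # budget balances in every type profile
print("AGV-style transfers balance the budget in every type profile.")
```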

11.
Willingness To Pay (WTP) of customers plays an anchoring role in pricing. This study proposes a new choice model based on WTP, incorporating sequential decision making, where the products with positive utility of purchase are considered in the order of customer preference. We compare the WTP‐choice model with the commonly used (multinomial) Logit model with respect to the underlying choice process, information requirement, and independence of irrelevant alternatives. Using the WTP‐choice model, we find and compare equilibrium and centrally optimal prices and profits without considering inventory availability. In addition, we compare equilibrium prices and profits in two contexts: without considering inventory availability and under lost sales. One of the interesting results with the WTP‐choice model is the “loose coupling” of retailers in competition; prices are not coupled but profits are. That is, each retailer should charge the monopoly price, as the collection of these prices constitutes an equilibrium, but each retailer's profit depends on other retailers' prices. Loose coupling fails when WTPs are dependent or when preferences depend on prices. Also, we show that competition among retailers facing dependent WTPs can cause price cycles under some conditions. We consider real‐life data on sales of yogurt, ketchup, candy melt, and tuna, and check whether a version of the WTP‐choice model (with uniform, triangle, or shifted exponential WTP distribution) or a standard or mixed Logit model fits and predicts the sales better. These empirical tests establish that the WTP‐choice model compares well and should be considered a legitimate alternative to Logit models for studying pricing of products with low price and high purchase frequency.
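
A small simulation sketch, with hypothetical WTP distributions and prices, of one reading of the sequential choice rule described above: a customer inspects products in preference order and buys the first one whose willingness to pay covers its price; the resulting shares are then set against a standard multinomial-logit benchmark.

```python
import numpy as np

# Sketch of the sequential WTP-based choice rule versus a logit benchmark
# (hypothetical parameters, not the paper's estimated model).
rng = np.random.default_rng(4)
n = 200_000
prices = np.array([3.0, 2.5])
wtp = np.column_stack([rng.uniform(0, 6, n),        # WTP for product 1 (preferred)
                       rng.uniform(0, 5, n)])       # WTP for product 2

buys_1 = wtp[:, 0] >= prices[0]                      # first choice in preference order
buys_2 = ~buys_1 & (wtp[:, 1] >= prices[1])          # fallback if product 1 is too dear
print("WTP-choice shares:", buys_1.mean(), buys_2.mean(),
      "no purchase:", 1 - buys_1.mean() - buys_2.mean())

# Multinomial-logit benchmark with deterministic utility = mean WTP - price.
v = np.array([wtp[:, 0].mean() - prices[0], wtp[:, 1].mean() - prices[1], 0.0])
logit = np.exp(v) / np.exp(v).sum()
print("Logit shares (product 1, product 2, no purchase):", logit)
```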

12.
In industries with a radical technology push or rapidly changing customer preferences, it is common wisdom for firms to introduce the high‐end product first and follow with low‐end product‐line extensions. A key decision in this “down‐market stretch” strategy is the introduction time. High inventory cost is pervasive in such industries, but its impact has long been ignored during the presale planning stage. This study takes a first step toward filling this gap. We propose an integrated inventory (supply) and diffusion (demand) framework and analyze how inventory cost influences the introduction timing of product‐line extensions, considering the substitution effect among successive generations. We show that under low inventory cost or a frequent replenishment ordering policy, the optimal introduction time indeed follows the well‐known “now or never” rule. However, sequential introduction becomes optimal as the inventory holding cost becomes more substantial or the product life cycle gets shorter. The optimal introduction timing can increase or decrease with the inventory cost depending on the marketplace setting, requiring a careful analysis.
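
A stylized numerical sketch, with hypothetical demand, cannibalization, and holding-cost numbers (not the paper's model), of how a larger holding cost can push the optimal introduction time of the extension away from immediate introduction when its life-cycle quantity is produced in a single batch at launch.

```python
import numpy as np

# Stylized sketch: profit from introducing a low-end extension at period T
# when its entire life-cycle quantity is produced in one batch at T, incurring
# holding cost until sold, and each selling period cannibalizes some high-end
# margin.  Earlier introduction sells more but holds inventory longer.
H = 24                 # planning horizon (periods)
d2 = 100               # low-end demand per period once introduced
m2 = 1.0               # margin per low-end unit
c = 40.0               # high-end margin lost per period to cannibalization

def profit(T, h):
    periods = H - T                                  # selling window of the extension
    units = d2 * periods                             # one batch produced at T
    holding = h * d2 * periods * (periods - 1) / 2   # inventory-periods until sold out
    return m2 * units - c * periods - holding

for h in (0.001, 0.05):
    best_T = max(range(H + 1), key=lambda T: profit(T, h))
    print(f"holding cost {h}: introduce extension at period {best_T}")
```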

13.
Make‐to‐order (MTO) manufacturers face a common problem of maintaining a desired service level for delivery at a reasonable cost while dealing with irregular customer orders. This research considers an MTO manufacturer who produces a product consisting of several custom parts to be ordered from multiple suppliers. We develop procedures to allocate orders to each supplier for each custom part and calculate the associated replenishment cost as well as the probability of meeting the delivery date, based on the suppliers' jobs on hand, availability, process speed, and defective rate. For a given delivery due date, a service‐level frontier and a replenishment‐cost frontier are created to provide a range of options to meet customer requirements. This method can be further extended to the case when the delivery due date is not fixed and the manufacturer must “crash” its delivery time to compete for customers.
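
A simulation sketch, with hypothetical supplier lead times, defect rates, and costs, of the kind of trade-off calculation described: for each assignment of custom parts to suppliers, estimate the probability that all parts arrive non-defective by the due date, and report it alongside the replenishment cost so the options trace out a cost / service-level frontier.

```python
import numpy as np
from itertools import product

# Sketch (hypothetical supplier data): replenishment cost versus estimated
# probability of meeting the due date for each way of sourcing two custom parts.
rng = np.random.default_rng(5)
due_date = 10.0
suppliers = {                # (mean lead time, lead-time std, defect rate, unit cost)
    "fast_expensive": (6.0, 2.0, 0.02, 120.0),
    "slow_cheap":     (9.0, 3.0, 0.05,  80.0),
}

def service_level(choice, n=50_000):
    ok = np.ones(n, dtype=bool)
    for name in choice:                                # both parts must arrive on time
        mean, sd, defect, _ = suppliers[name]
        lead = rng.gamma((mean / sd) ** 2, sd ** 2 / mean, size=n)  # matches mean, sd
        good = rng.random(n) > defect                  # delivered part is usable
        ok &= (lead <= due_date) & good
    return ok.mean()

for choice in product(suppliers, repeat=2):            # one supplier per custom part
    cost = sum(suppliers[name][3] for name in choice)
    print(choice, "cost:", cost, "P(on-time):", round(service_level(choice), 3))
```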

14.
The Petroleum Safety Authority Norway (PSA‐N) has recently adopted a new definition of risk: “the consequences of an activity with the associated uncertainty.” The PSA‐N has also been using “deficient risk assessment” for some time as a basis for assigning nonconformities in audit reports. This creates an opportunity to study the link between risk perspective and risk assessment quality in a regulatory context, and, in the present article, we take a hard look at the term “deficient risk assessment” both normatively and empirically. First, we perform a conceptual analysis of how a risk assessment can be deficient in light of a particular risk perspective consistent with the new PSA‐N risk definition. Then, we examine the usages of the term “deficient” in relation to risk assessments in PSA‐N audit reports and classify these into a set of categories obtained from the conceptual analysis. At an overall level, we were able to identify on what aspects of the risk assessment the PSA‐N is focusing and where deficiencies are being identified in regulatory practice. A key observation is that there is a diversity in how the agency officials approach the risk assessments in audits. Hence, we argue that improving the conceptual clarity of what the authorities characterize as “deficient” in relation to the uncertainty‐based risk perspective may contribute to the development of supervisory practices and, eventually, potentially strengthen the learning outcome of the audit reports.

15.
Spatial and/or temporal clustering of pathogens will invalidate the commonly used assumption of Poisson‐distributed pathogen counts (doses) in quantitative microbial risk assessment. In this work, the theoretically predicted effect of spatial clustering in conventional “single‐hit” dose‐response models is investigated by employing the stuttering Poisson distribution, a very general family of count distributions that naturally models pathogen clustering and contains the Poisson and negative binomial distributions as special cases. The analysis is facilitated by formulating the dose‐response models in terms of probability generating functions. It is shown formally that the theoretical single‐hit risk obtained with a stuttering Poisson distribution is lower than that obtained with a Poisson distribution, assuming identical mean doses. A similar result holds for mixed Poisson distributions. Numerical examples indicate that the theoretical single‐hit risk is fairly insensitive to moderate clustering, though the effect tends to be more pronounced for low mean doses. Furthermore, using Jensen's inequality, an upper bound on risk is derived that tends to better approximate the exact theoretical single‐hit risk for highly overdispersed dose distributions. The bound holds with any dose distribution (characterized by its mean and zero inflation index) and any conditional dose‐response model that is concave in the dose variable. Its application is exemplified with published data from Norovirus feeding trials, for which some of the administered doses were prepared from an inoculum of aggregated viruses. The potential implications of clustering for dose‐response assessment as well as practical risk characterization are discussed.
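
A minimal numerical sketch of the core comparison: with a constant per-organism infection probability r, the single-hit risk is 1 − E[(1 − r)^N], i.e., one minus the probability generating function of the dose N evaluated at 1 − r. Under a Poisson dose with mean μ this is 1 − exp(−rμ); under a negative binomial dose (a special case of the stuttering Poisson family) with the same mean it is 1 − (1 + rμ/k)^(−k), which is lower, consistent with the result stated above.

```python
import numpy as np

# Single-hit risk under Poisson versus negative binomial doses with equal means:
# clustering (overdispersion, small k) lowers the theoretical single-hit risk.
r = 0.05                                   # per-organism probability of infection
mu = np.array([0.5, 2.0, 10.0, 50.0])      # mean doses

risk_poisson = 1.0 - np.exp(-r * mu)                       # PGF of Poisson at 1 - r
for k in (0.2, 1.0, 5.0):                                  # NB dispersion parameter
    risk_nb = 1.0 - (1.0 + r * mu / k) ** (-k)             # PGF of NB at 1 - r
    print(f"k = {k}: NB risk {np.round(risk_nb, 4)} <= Poisson risk {np.round(risk_poisson, 4)}")
```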

16.
Although distributed teams have been researched extensively in the information systems and decision science disciplines, a review of the literature suggests that the dominant focus has been on understanding the factors affecting performance at the team level. There has, however, been an increasing recognition that specific individuals within such teams are often critical to the team's performance. Consequently, existing knowledge about such teams may be enhanced by examining the factors that affect the performance of individual team members. This study attempts to address this need by identifying individuals who emerge as “stars” in globally distributed teams involved in knowledge work such as information systems development (ISD). Specifically, the study takes a knowledge‐centered view in explaining which factors lead to “stardom” in such teams. Further, it adopts a social network approach consistent with the core principles of structural/relational analysis in developing and empirically validating the research model. Data from U.S.–Scandinavia self‐managed “hybrid” teams engaged in systems development were used to deductively test the proposed model. The overall study has several implications for group decision making: (i) the study focuses on stars within distributed teams, who play an important role in shaping group decision making and emerge as a result of negotiated/consensual decision making within egalitarian teams; (ii) an examination of emergent stars from the team members' point of view reflects the collective acceptance and support dimension of decision‐making contexts identified in prior literature; (iii) finally, the study suggests that social network analysis using relational data can serve as a democratic decision‐making technique within groups.
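
A minimal sketch, on hypothetical advice-network data, of the social-network step involved in this kind of analysis: represent who seeks knowledge from whom within a team and rank members by centrality as candidate "stars." The study's own relational measures and statistical model are richer than this.

```python
import networkx as nx

# Sketch (hypothetical relational data): edge u -> v means u seeks advice from v.
advice_ties = [("ann", "bo"), ("carl", "bo"), ("dee", "bo"),
               ("bo", "ann"), ("dee", "ann"), ("carl", "eve")]
G = nx.DiGraph(advice_ties)

in_degree = nx.in_degree_centrality(G)              # how often a member is sought out
betweenness = nx.betweenness_centrality(G)          # brokerage across the team
for member in sorted(G, key=in_degree.get, reverse=True):
    print(member, "in-degree:", round(in_degree[member], 2),
          "betweenness:", round(betweenness[member], 2))
```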

17.
We study a dynamic economy where credit is limited by insufficient collateral and, as a result, investment and output are too low. In this environment, changes in investor sentiment or market expectations can give rise to credit bubbles, that is, expansions in credit that are backed not by expectations of future profits (i.e., fundamental collateral), but instead by expectations of future credit (i.e., bubbly collateral). Credit bubbles raise the availability of credit for entrepreneurs: this is the crowding‐in effect. However, entrepreneurs must also use some of this credit to cancel past credit: this is the crowding‐out effect. There is an “optimal” bubble size that trades off these two effects and maximizes long‐run output and consumption. The equilibrium bubble size depends on investor sentiment, however, and it typically does not coincide with the “optimal” bubble size. This provides a new rationale for macroprudential policy. A credit management agency (CMA) can replicate the “optimal” bubble by taxing credit when the equilibrium bubble is too high and subsidizing credit when the equilibrium bubble is too low. This leaning‐against‐the‐wind policy maximizes output and consumption. Moreover, the same conditions that make this policy desirable guarantee that a CMA has the resources to implement it.

18.
We develop results for the use of Lasso and post‐Lasso methods to form first‐stage predictions and estimate optimal instruments in linear instrumental variables (IV) models with many instruments, p. Our results apply even when p is much larger than the sample size, n. We show that the IV estimator based on using Lasso or post‐Lasso in the first stage is root‐n consistent and asymptotically normal when the first stage is approximately sparse, that is, when the conditional expectation of the endogenous variables given the instruments can be well‐approximated by a relatively small set of variables whose identities may be unknown. We also show that the estimator is semiparametrically efficient when the structural error is homoscedastic. Notably, our results allow for imperfect model selection, and do not rely upon the unrealistic “beta‐min” conditions that are widely used to establish validity of inference following model selection (see also Belloni, Chernozhukov, and Hansen (2011b)). In simulation experiments, the Lasso‐based IV estimator with a data‐driven penalty performs well compared to recently advocated many‐instrument robust procedures. In an empirical example dealing with the effect of judicial eminent domain decisions on economic outcomes, the Lasso‐based IV estimator outperforms an intuitive benchmark. The optimal instruments are conditional expectations. In developing the IV results, we establish a series of new results for Lasso and post‐Lasso estimators of nonparametric conditional expectation functions which are of independent theoretical and practical interest. We construct a modification of Lasso designed to deal with non‐Gaussian, heteroscedastic disturbances that uses a data‐weighted ℓ1‐penalty function. By innovatively using moderate deviation theory for self‐normalized sums, we provide convergence rates for the resulting Lasso and post‐Lasso estimators that are as sharp as the corresponding rates in the homoscedastic Gaussian case under the condition that log p = o(n^{1/3}). We also provide a data‐driven method for choosing the penalty level that must be specified in obtaining Lasso and post‐Lasso estimates and establish its asymptotic validity under non‐Gaussian, heteroscedastic disturbances.
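
A minimal sketch, on simulated data and using sklearn's cross-validated Lasso rather than the paper's specific data-driven penalty, of the two-step idea: fit the first stage of the endogenous regressor on many instruments with Lasso, then use the fitted values as the instrument in the second stage.

```python
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

# Sketch: Lasso first stage for IV with many instruments (simulated data; the
# paper's penalty choice and post-Lasso refit step are not reproduced here).
rng = np.random.default_rng(6)
n, p = 500, 200                                     # many instruments, sparse first stage
Z = rng.normal(size=(n, p))
pi = np.zeros(p)
pi[:5] = 1.0                                        # only 5 instruments matter
v = rng.normal(size=n)
u = 2.0 * v + 0.5 * rng.normal(size=n)              # endogeneity via correlated errors
x = Z @ pi + v
beta_true = 1.0
y = beta_true * x + u

x_hat = LassoCV(cv=5).fit(Z, x).predict(Z)          # Lasso-based first-stage fit
beta_iv = np.sum(x_hat * y) / np.sum(x_hat * x)     # IV estimate with x_hat as instrument
beta_ols = LinearRegression().fit(x.reshape(-1, 1), y).coef_[0]
print("OLS (biased):", round(beta_ols, 3),
      " Lasso-IV:", round(beta_iv, 3), " true:", beta_true)
```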

19.
Inter‐customer interactions are important to the operation of self‐services in retail settings. More specifically, when self‐service terminals are used as part of customers' checkout processes in retail operations without the explicit involvement of retailers as the direct service providers, inter‐customer interactions become a significant managerial issue. In this article, we examine the impact of inter‐customer interactions at retail self‐service terminals on customers' service quality perceptions and repeat purchase intentions at retail stores. We conduct a scenario‐based experiment (N = 674) with a 2 × 2 factorial design in which inter‐customer interactions are divided into “positive” vs. “negative” and occur either during the “waiting” stage or during the actual “transaction” stage of self‐services at a retail store. We use attribution theory to develop the hypotheses. The results demonstrate that, through their interactions, fellow customers can exert influences on a focal customer's quality perceptions and repeat purchasing intentions toward a retail store. Furthermore, these influences were impacted by how customers attribute blame or assign responsibility toward the retail store. Service operations managers should leverage these interactions by designing into self‐service settings the capacities and interfaces that are best suited for customers' co‐production of their self‐service experiences.

20.
Water reuse can serve as a sustainable alternative water source for urban areas. However, the successful implementation of large‐scale water reuse projects depends on community acceptance. Because of the negative perceptions that are traditionally associated with reclaimed water, water reuse is often not considered in the development of urban water management plans. This study develops a simulation model for understanding community opinion dynamics surrounding the issue of water reuse, and how individual perceptions evolve within that context, which can help in the planning and decision‐making process. Based on the social amplification of risk framework, our agent‐based model simulates consumer perceptions, discussion patterns, and their adoption or rejection of water reuse. The model is based on the “risk publics” model, an empirical approach that uses the concept of belief clusters to explain the adoption of new technology. Each household is represented as an agent, and parameters that define their behavior and attributes are defined from survey data. Community‐level parameters—including social groups, relationships, and communication variables, also from survey data—are encoded to simulate the social processes that influence community opinion. The model demonstrates its capabilities to simulate opinion dynamics and consumer adoption of water reuse. In addition, based on empirical data, the model is applied to investigate water reuse behavior in different regions of the United States. Importantly, our results reveal that public opinion dynamics emerge differently based on membership in opinion clusters, frequency of discussion, and the structure of social networks.
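
A minimal sketch of the agent-based mechanics described above, with hypothetical parameters: households sit on a social network, start in rough opinion clusters, repeatedly discuss water reuse with neighbors, and are counted as adopters once their opinion crosses a threshold. The study's model is calibrated to survey data and to the "risk publics" belief clusters, which this sketch does not attempt.

```python
import random
import networkx as nx

# Agent-based opinion-dynamics sketch (hypothetical parameters).
random.seed(7)
G = nx.watts_strogatz_graph(n=200, k=6, p=0.1)            # community social network
opinion = {i: random.choice([0.2, 0.5, 0.8]) for i in G}   # crude "belief clusters"
adopted = {i: False for i in G}
discussion_rate, influence, threshold = 0.3, 0.1, 0.75

for step in range(50):
    for i in G:
        if random.random() < discussion_rate:
            j = random.choice(list(G[i]))                  # talk to a random neighbor
            opinion[i] += influence * (opinion[j] - opinion[i])
        if opinion[i] >= threshold:
            adopted[i] = True                              # household accepts water reuse
    if step % 10 == 0:
        print(f"step {step:2d}: adoption rate {sum(adopted.values()) / len(G):.2f}")
```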
