Similar Literature
 20 similar documents found (search time: 125 ms)
1.
In dynamic discrete choice analysis, controlling for unobserved heterogeneity is an important issue, and finite mixture models provide flexible ways to account for it. This paper studies nonparametric identifiability of type probabilities and type‐specific component distributions in finite mixture models of dynamic discrete choices. We derive sufficient conditions for nonparametric identification for various finite mixture models of dynamic discrete choices used in applied work under different assumptions on the Markov property, stationarity, and type‐invariance in the transition process. Three elements emerge as the important determinants of identification: the time‐dimension of panel data, the number of values the covariates can take, and the heterogeneity of the response of different types to changes in the covariates. For example, in a simple case where the transition function is type‐invariant, a time‐dimension of T = 3 is sufficient for identification, provided that the number of values the covariates can take is no smaller than the number of types and that the changes in the covariates induce sufficiently heterogeneous variations in the choice probabilities across types. Identification is achieved even when state dependence is present if a model is stationary first‐order Markovian and the panel has a moderate time‐dimension (T ≥ 6).
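The rank logic behind this kind of identification argument can be made concrete in a toy example. The sketch below is our own construction, not the paper's proof, and all probabilities are assumed values: with two latent types, the cross‐period joint distribution factors through the types, so its matrix rank reveals the number of types; the rank collapses when types respond identically, which is the "sufficiently heterogeneous variation" condition failing.

```python
import numpy as np

# Toy illustration (not the paper's proof): with two latent types and a
# binary outcome observed in two periods, the joint distribution matrix
# M[i, j] = sum_k pi_k * p_k(i) * q_k(j) factors as p.T @ diag(pi) @ q.
# Its rank equals the number of types when the type-specific choice
# probabilities differ across types -- the "heterogeneous response"
# condition in the abstract.

pi = np.array([0.4, 0.6])                 # type probabilities (assumed)
p = np.array([[0.8, 0.2], [0.3, 0.7]])    # P(y1 | type), rows sum to 1
q = np.array([[0.6, 0.4], [0.1, 0.9]])    # P(y2 | type)

M = p.T @ np.diag(pi) @ q                 # 2x2 joint distribution of (y1, y2)
print(np.linalg.matrix_rank(M))           # 2: both types are detectable

# If the two types respond identically, the rank collapses and the
# mixture is no longer identified from this cross-tabulation.
q_same = np.array([[0.6, 0.4], [0.6, 0.4]])
M2 = p.T @ np.diag(pi) @ q_same
print(np.linalg.matrix_rank(M2))          # 1
```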

2.
Multistage clonal expansion (MSCE) models of carcinogenesis are continuous‐time Markov process models often used to relate cancer incidence to biological mechanism. Identifiability analysis determines what model parameter combinations can, theoretically, be estimated from given data. We use a systematic approach, based on differential algebra methods traditionally used for deterministic ordinary differential equation (ODE) models, to determine identifiable combinations for a generalized subclass of MSCE models with any number of preinitiation stages and one clonal expansion. Additionally, we determine the identifiable combinations of the generalized MSCE model with up to four clonal expansion stages, and conjecture the results for any number of clonal expansion stages. The results improve upon previous work in a number of ways and provide a framework to find the identifiable combinations for further variations on the MSCE models. Finally, our approach, which takes advantage of the Kolmogorov backward equations for the probability generating functions of the Markov process, demonstrates that identifiability methods used in engineering and mathematics for systems of ODEs can be applied to continuous‐time Markov processes.

3.
Consider a group consisting of S members facing a common budget constraint p'ξ=1: any demand vector belonging to the budget set can be (privately or publicly) consumed by the members. Although the intragroup decision process is not known, it is assumed to generate Pareto‐efficient outcomes; neither individual consumptions nor intragroup transfers are observable. The paper analyzes when, to what extent, and under which conditions it is possible to recover the underlying structure—individual preferences and the decision process—from the group's aggregate behavior. We show that the general version of the model is not identified. However, a simple exclusion assumption (whereby each member does not consume at least one good) is sufficient to guarantee generic identifiability of the welfare‐relevant structural concepts.

4.
Cyclic inventory is the buffer following a machine that cycles over a set of products, each of which is subsequently consumed in a continuous manner. Scheduling such a machine is interesting when the changeover times from one product to another are non‐trivial—which is generally the case. This problem has a substantial literature, but the common practices of “lot‐splitting” and “maximization of utilization” suggest that many practitioners still do not fully understand the principles of cyclic inventory. This paper is a tutorial that demonstrates those principles. We show that cyclic inventory is directly proportional to cycle length, which in turn is directly proportional to total changeover time, and inversely proportional to machine utilization. We demonstrate the virtue of “maximum changeover policies” in minimizing cyclic inventory—and the difficulty in making the transition to an increased level of demand. In so doing, we explicate the different roles of cyclic inventory, transitional inventory, and safety stock. We demonstrate the interdependence of the products in the cycle—the lot‐size for one product cannot be set independently of the remaining products. We also give necessary conditions for consideration of improper schedules (i.e., where a product can appear more than once in the cycle), and demonstrate that both lot‐splitting and maximization of utilization are devastatingly counter‐productive when changeover time is non‐trivial.
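The proportionalities the tutorial demonstrates follow from simple rotation‐cycle arithmetic. A minimal sketch, with assumed demand rates, production rates, and changeover times (not data from the paper):

```python
# Rotation-cycle arithmetic behind the tutorial's claims.  With demand
# rates d_i, production rates p_i and changeover times s_i, a feasible
# common cycle of length T must cover both run time and changeovers:
#     sum_i (d_i / p_i) * T + sum_i s_i <= T
# so the shortest cycle is T* = S / (1 - rho), with S the total changeover
# time and rho the machine utilization.  Average cyclic inventory of each
# product is proportional to T, which is the abstract's key proportionality.

d = [10.0, 6.0, 4.0]     # demand rates (units/hour), assumed
p = [40.0, 30.0, 25.0]   # production rates (units/hour), assumed
s = [1.0, 0.5, 1.5]      # changeover times (hours), assumed

rho = sum(di / pi for di, pi in zip(d, p))   # utilization implied by demand
S = sum(s)                                   # total changeover time
T = S / (1.0 - rho)                          # minimal feasible cycle length

print(f"utilization rho = {rho:.3f}, minimal cycle T* = {T:.2f} h")
for i, (di, pi) in enumerate(zip(d, p)):
    run = di * T / pi                        # production run length
    peak = (pi - di) * run                   # peak inventory of product i
    print(f"product {i}: run {run:.2f} h, avg cyclic inventory {peak / 2:.1f}")

# Doubling every changeover time doubles S, hence doubles T* and all cyclic
# inventories; pushing rho toward 1 ("maximizing utilization") makes T* --
# and inventory -- blow up, the counter-productivity the tutorial warns about.
```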

5.
We consider scheduling issues at Beyçelik, a Turkish automotive stamping company that uses presses to give shape to metal sheets in order to produce auto parts. The problem concerns the minimization of the total completion time of job orders (i.e., makespan) during a planning horizon. This problem may be classified as a combined generalized flowshop and flexible flowshop problem with special characteristics. We show that the Stamping Scheduling Problem is NP‐Hard. We develop an integer programming‐based method to build realistic and usable schedules. Our results show that the proposed method is able to find higher quality schedules (i.e., shorter makespan values) than both the company's current process and a model from the literature. However, the proposed method has a relatively long run time, which is not practical for the company in situations when a (new) schedule is needed quickly (e.g., when there is a machine breakdown or a rush order). To improve the solution time, we develop a second method that is inspired by decomposition. We show that the second method provides higher‐quality solutions—and in most cases optimal solutions—in a shorter time. We compare the performance of all three methods with the company's schedules. The second method finds a solution in minutes compared to Beyçelik's current process, which takes 28 hours. Further, the makespan values of the second method are about 6.1% shorter than the company's schedules. We estimate that the company can save over €187,000 annually by using the second method. We believe that the models and methods developed in this study can be used in similar companies and industries.
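As a generic illustration of the makespan objective, the sketch below evaluates the textbook permutation‐flowshop recurrence; this is not the paper's integer program or its decomposition method, and the processing times are assumed.

```python
import itertools

# In a plain permutation flowshop, job completion times follow
#     C[j][m] = max(C[j-1][m], C[j][m-1]) + p[j][m]
# and the makespan is the completion time of the last job on the last machine.

def makespan(proc, order):
    """proc[j][m] = processing time of job j on machine m."""
    n_machines = len(proc[0])
    done = [0.0] * n_machines        # completion time of previous job per machine
    for j in order:
        for m in range(n_machines):
            ready = done[m - 1] if m > 0 else 0.0  # this job's finish on machine m-1
            done[m] = max(done[m], ready) + proc[j][m]
    return done[-1]

proc = [[3, 2, 4], [2, 5, 1], [4, 1, 3]]   # 3 jobs x 3 machines (assumed data)
print(makespan(proc, [0, 1, 2]))                        # one candidate schedule
print(min(makespan(proc, o)                             # exhaustive search is
          for o in itertools.permutations(range(3))))   # fine only at toy size
```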

6.
Abstract. The Portuguese economy has been characterized by modernization since the post‐war period, and Lisbon is a centre of this process. This paper analyses rates of return on human capital in Lisbon versus the rest of the country in the period 1982–92. An assignment model of heterogeneous workers to heterogeneous jobs is applied. We introduce the concept of the complexity dispersion parameter, which measures job heterogeneity and the ease of substitution between worker types. It is free dimension and can be compared across countries. We also develop a cookbook recipe for the estimation of this parameter. The main implication of the model — a high return to human capital is associated with similar workers being assigned to more complex jobs — is confirmed by the data. The complexity dispersion parameter suggests that paying half of the optimal wage level at least doubles the cost per efficiency unit of labour.  相似文献   

7.
Supply disruptions are all too common in supply chains. To mitigate delivery risk, buyers may either source from multiple suppliers or offer incentives to their preferred supplier to improve its process reliability. These incentives can be either direct (investment subsidy) or indirect (inflated order quantity). In this study, we present a series of models to highlight buyers’ and suppliers’ optimal parameter choices. Our base‐case model has deterministic buyer demand and two possibilities for the supplier yield outcomes: all‐or‐nothing supply or partial disruption. For the all‐or‐nothing model, we show that the buyer prefers to only use the subsidy option, which obviates the need to inflate order quantity. However, in the partial disruption model, both incentives—subsidy and order inflation—may be used at the same time. Although single sourcing provides greater indirect incentive to the selected supplier because that avoids order splitting, we show that the buyer may prefer the diversification strategy under certain circumstances. We also quantify the amount by which the wholesale price needs to be discounted (if at all) to ensure that the dual sourcing strategy dominates sole sourcing. Finally, we extend the model to the case of stochastic demand. Structural properties of ordering/subsidy decisions are derived for the all‐or‐nothing model, and in contrast to the deterministic demand case, we establish that the buyer may increase use of subsidy and order quantity at the same time.
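The base‐case intuition, that under all‐or‐nothing yield an inflated order cannot hedge the failure state while a subsidy can, is easy to verify numerically. The sketch below uses our own toy functional forms (a linear reliability response p(s) and assumed prices), not the paper's model:

```python
import numpy as np

# Toy all-or-nothing base case: the supplier delivers the full order with
# probability p(s) = min(1, p0 + lam * s), improved by a buyer subsidy s.
# Demand D is deterministic; the buyer earns margin (r - w) per unit sold.
# Because supply is all-or-nothing, units ordered beyond D add cost in the
# success state but never hedge the failure state -- so the buyer leans on
# the subsidy alone, the abstract's base-case finding.

p0, lam = 0.6, 0.02          # baseline reliability, subsidy effectiveness (assumed)
r, w, D = 10.0, 6.0, 100.0   # retail price, wholesale price, demand (assumed)

def expected_profit(q, s):
    p = min(1.0, p0 + lam * s)
    # success: pay for q units, sell min(q, D); failure: no cost, no sales
    return p * (r * min(q, D) - w * q) - s

for q in (100.0, 120.0):     # inflating the order only hurts here
    print(q, expected_profit(q, 0.0))

subsidies = np.linspace(0.0, 30.0, 301)
best = max(subsidies, key=lambda s: expected_profit(D, s))
print("best subsidy at q = D:", best, expected_profit(D, best))
```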

8.
This paper derives optimal inheritance tax formulas that capture the key equity‐efficiency trade‐off, are expressed in terms of estimable sufficient statistics, and are robust to the underlying structure of preferences. We consider dynamic stochastic models with general and heterogeneous bequest tastes and labor productivities. We limit ourselves to simple but realistic linear or two‐bracket tax structures to obtain tractable formulas. We show that long‐run optimal inheritance tax rates can always be expressed in terms of aggregate earnings and bequest elasticities with respect to tax rates, distributional parameters, and social preferences for redistribution. Those results carry over with tractable modifications to (a) the case with social discounting (instead of steady‐state welfare maximization), (b) the case with partly accidental bequests, (c) the standard Barro–Becker dynastic model. The optimal tax rate is positive and quantitatively large if the elasticity of bequests to the tax rate is low, bequest concentration is high, and society cares mostly about those receiving little inheritance. We propose a calibration using micro‐data for France and the United States. We find that, for realistic parameters, the optimal inheritance tax rate might be as large as 50%–60%—or even higher for top bequests, in line with historical experience.
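The comparative statics in the last sentences can be illustrated with a stylized reduced form. The formula below is an assumption made for illustration, chosen to capture the abstract's qualitative claims; it is not the paper's exact expression:

```python
# Stylized numerical reading of the abstract's comparative statics.  The
# reduced form below is an ASSUMPTION for illustration, not the paper's
# exact formula:
#     tau = (1 - g * bbar) / (1 + e_B)
# with e_B the elasticity of aggregate bequests with respect to the tax
# rate, bbar a distributional term for bequests received, and g the social
# weight placed on inheritors.

def tau(e_B, bbar, g):
    return (1.0 - g * bbar) / (1.0 + e_B)

# low elasticity, concentrated bequests, society favors non-inheritors:
print(f"{tau(e_B=0.2, bbar=0.5, g=0.3):.2f}")   # ~0.71, large as in the paper
# high elasticity, diffuse bequests, inheritors fully weighted:
print(f"{tau(e_B=1.0, bbar=1.0, g=1.0):.2f}")   # 0.00
```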

9.
Prediction error identification methods have recently been the object of much study, and have wide applicability. The maximum likelihood (ML) identification methods for Gaussian models and the least squares prediction error method (LSPE) are special cases of the general approach. In this paper, we investigate conditions for distinguishability or identifiability of multivariate random processes, for both continuous and discrete observation time T. We consider stationary stochastic processes, for the ML and LSPE methods, and for large observation interval T, we resolve the identifiability question. Our analysis begins by considering stationary autoregressive moving average models, but the conclusions apply for general stationary, stable vector models. The limiting value for T → ∞ of the criterion function is evaluated, and it is viewed as a distance measure in the parameter space of the model. The main new result of this paper is to specify the equivalence classes of stationary models that achieve the global minimization of the above distance measure, and hence to determine precisely the classes of models that are not identifiable from each other. The new conclusions are useful for parameterizing multivariate stationary models in system identification problems. Relationships to previously discovered identifiability conditions are discussed.
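A concrete instance of such an equivalence class is the classic MA(1) pair below: inverting the moving‐average root while rescaling the innovation variance leaves the spectral density, and hence the Gaussian likelihood and prediction errors, unchanged. A quick numerical check with illustrative values:

```python
import numpy as np

# The Gaussian MA(1) processes
#     x_t = e_t + theta * e_{t-1},        Var(e) = sigma2
#     x_t = u_t + (1/theta) * u_{t-1},    Var(u) = sigma2 * theta**2
# have identical spectral densities, hence identical Gaussian likelihoods
# and prediction errors for any sample size -- ML/LSPE cannot tell them apart.

theta, sigma2 = 0.5, 1.0
omega = np.linspace(0.0, np.pi, 5)

def spectrum(th, s2, w):
    return s2 * np.abs(1.0 + th * np.exp(-1j * w)) ** 2 / (2.0 * np.pi)

f1 = spectrum(theta, sigma2, omega)
f2 = spectrum(1.0 / theta, sigma2 * theta**2, omega)
print(np.allclose(f1, f2))   # True: observationally equivalent models
```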

10.
Many models of exposure-related carcinogenesis, including traditional linearized multistage models and more recent two-stage clonal expansion (TSCE) models, belong to a family of models in which cells progress between successive stages (possibly undergoing proliferation at some stages) at rates that may depend (usually linearly) on biologically effective doses. Biologically effective doses, in turn, may depend nonlinearly on administered doses, due to PBPK nonlinearities. This article provides an exact mathematical analysis of the expected number of cells in the last ("malignant") stage of such a "multistage clonal expansion" (MSCE) model as a function of dose rate and age. The solution displays symmetries such that several distinct sets of parameter values provide identical fits to all epidemiological data, make identical predictions about the effects on risk of changes in exposure levels or timing, and yet make significantly different predictions about the effects on risk of changes in the composition of exposure that affect the pharmacodynamic dose-response relation. Several different predictions for the effects of such an intervention (such as reducing carcinogenic constituents of an exposure) that acts on only one or a few stages of the carcinogenic process may be equally consistent with all preintervention epidemiological data. This is an example of nonunique identifiability of model parameters and predictions from data. The new results on nonunique model identifiability presented here show that the effects of an intervention on changing age-specific cancer risks in an MSCE model can be either large or small, but that which is the case cannot be predicted from preintervention epidemiological data and knowledge of biological effects of the intervention alone. Rather, biological data that identify which rate parameters hold for which specific stages are required to obtain unambiguous predictions. From epidemiological data alone, only a set of equally likely alternative predictions can be made for the effects on risk of such interventions.

11.
Quality issues in milk—arising primarily from deliberate adulteration by producers—have been reported in several developing countries. In the milk supply chain, a station buys raw milk from a number of producers, mixes the milk and sells it to a firm (that then sells the processed milk to end consumers). We study a non‐cooperative game between a station and a population of producers. Apart from penalties on proven low‐quality producers, two types of incentives are analyzed: confessor rewards for low‐quality producers who confess and quality rewards for producers of high‐quality milk. Contrary to our expectations, whereas (small) confessor rewards can help increase both the quality of milk and the station's profit, quality rewards can be detrimental. We examine two structures based on the ordering of individual and mixed testing of milk: pre‐mixed individual testing (first test a fraction of producers individually and then [possibly] perform a mixed test on the remaining producers) and post‐mixed individual testing (first test the mixed milk from all producers and then test a fraction of producers individually). Whereas pre‐mixed individual testing can be socially harmful, a combination of post‐mixed individual testing and other incentives achieves a desirable outcome: all producers supply high‐quality milk with only one mixed test and no further testing by the station.
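The economics of post‐mixed individual testing can be sketched with a back‐of‐the‐envelope calculation; the simplification and parameters below are our own, not the paper's game:

```python
# With n producers each adulterating independently with probability a, the
# station always runs 1 mixed test, and escalates to individual tests on a
# fraction f of producers only if the mixed test fails.

def expected_tests(n, a, f):
    p_fail = 1.0 - (1.0 - a) ** n      # mixed test flags any low quality
    return 1.0 + p_fail * f * n

print(expected_tests(n=50, a=0.10, f=1.0))   # widespread adulteration: costly
print(expected_tests(n=50, a=0.00, f=1.0))   # incentives work: 1 mixed test
# The second line is the abstract's desirable outcome: when incentives induce
# all producers to supply high quality, a single mixed test suffices.
```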

12.
Groundwater leakage into subsurface constructions can cause reduction of pore pressure and subsidence in clay deposits, even at large distances from the location of the construction. The potential cost of damage is substantial, particularly in urban areas. The large‐scale process also implies heterogeneous soil conditions that cannot be described in complete detail, which causes a need for estimating uncertainty of subsidence with probabilistic methods. In this study, the risk for subsidence is estimated by coupling two probabilistic models, a geostatistics‐based soil stratification model with a subsidence model. Statistical analyses of stratification and soil properties are inputs into the models. The results include spatially explicit probabilistic estimates of subsidence magnitude and sensitivities of included model parameters. From these, areas with significant risk for subsidence are distinguished from low‐risk areas. The efficiency and usefulness of this modeling approach as a tool for communication to stakeholders, decision support for prioritization of risk‐reducing measures, and identification of the need for further investigations and monitoring are demonstrated with a case study of a planned tunnel in Stockholm.

13.
While scientific studies may help conflicting stakeholders come to agreement on a best management option or policy, often they do not. We review the factors affecting trust in the efficacy and objectivity of scientific studies in an analytical‐deliberative process where conflict is present, and show how they may be incorporated in an extension to the traditional Bayesian decision model. The extended framework considers stakeholders who differ in their prior beliefs regarding the probability of possible outcomes (in particular, whether a proposed technology is hazardous), differ in their valuations of these outcomes, and differ in their assessment of the ability of a proposed study to resolve the uncertainty in the outcomes and their hazards—as measured by their perceived false positive and false negative rates for the study. The Bayesian model predicts stakeholder‐specific preposterior probabilities of consensus, as well as pathways for increasing these probabilities, providing important insights into the value of scientific information in an analytic‐deliberative decision process where agreement is sought. It also helps to identify the interactions among perceived risk and benefit allocations, scientific beliefs, and trust in proposed scientific studies when determining whether a consensus can be achieved. The article provides examples to illustrate the method, including an adaptation of a recent decision analysis for managing the health risks of electromagnetic fields from high voltage transmission lines.
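The core Bayesian mechanics can be sketched compactly. In the toy below, the numbers are invented and "consensus" is operationalized as both posteriors falling on the same side of a 0.5 action threshold, which is our simplification; each stakeholder updates a prior using their own perceived false positive and false negative rates for the proposed study.

```python
# Each stakeholder i holds a prior that the technology is hazardous and a
# perceived false positive (fp) and false negative (fn) rate for the study.
# Bayes' rule gives that stakeholder's posterior for either study outcome.

def posterior(prior, fp, fn, positive):
    if positive:   # study reports "hazardous"
        num, den = prior * (1 - fn), prior * (1 - fn) + (1 - prior) * fp
    else:          # study reports "safe"
        num, den = prior * fn, prior * fn + (1 - prior) * (1 - fp)
    return num / den

stakeholders = [   # (prior, perceived fp, perceived fn) -- assumed values
    (0.7, 0.05, 0.10),   # skeptic of the technology who trusts the study
    (0.2, 0.30, 0.10),   # proponent who distrusts positive findings
]

for outcome in (True, False):
    posts = [posterior(pr, fp, fn, outcome) for pr, fp, fn in stakeholders]
    consensus = (min(posts) > 0.5) or (max(posts) < 0.5)
    print(outcome, [round(p, 2) for p in posts], "consensus:", consensus)
# A "safe" finding produces consensus here, but a "hazardous" one does not:
# the proponent's high perceived false positive rate blunts the update --
# the trust effect the extended framework formalizes.
```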

14.
We examine the effect of a hospital's objective (i.e., non‐profit vs. for‐profit) in hospital markets for elective care. Using game‐theoretic analysis and queueing models to capture the operational performance of hospitals, we compare the equilibrium behavior of three market settings in terms of such criteria as waiting times and patient costs from waiting and hospital payments. In the first setting, a monopoly, patients are served exclusively by a single non‐profit hospital; in the second, a homogeneous duopoly, patients are served by two competing non‐profit hospitals. In our third setting, a heterogeneous duopoly, the market is served by one non‐profit hospital and one for‐profit hospital. A non‐profit hospital provides free care to patients, although they may have to wait; for‐profit hospitals charge a fee to provide care with minimal waiting. A comparison between the monopolistic and each of the duopolistic settings reveals that the introduction of competition can hamper a hospital's ability to attain economies of scale and can also increase waiting times. Moreover, the presence of a for‐profit sector may be desirable only when the hospital market is sufficiently competitive. A comparison across the duopolistic settings indicates that the choice between homogeneous and heterogeneous competition depends on the patients' willingness to wait before receiving care and the reimbursement level of the non‐profit sector. When the public funder is not financially constrained, the presence of a for‐profit sector may allow the funder to lower both the financial costs of providing coverage and the total costs to patients. Finally, our analysis suggests that the public funder should exercise caution when using policy tools that support the for‐profit sector—for example, patient subsidies—because such tools may increase patient costs in the long run; it might be preferable to raise the non‐profit sector's level of reimbursement.
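The economies‐of‐scale effect is the standard pooling result from queueing theory, sketched below with illustrative rates: splitting a given demand and capacity across two hospitals doubles the M/M/1 wait at equal utilization.

```python
# In an M/M/1 queue the mean time in system is W = 1 / (mu - lambda).
# Pooling all elective patients at one hospital with the combined capacity
# beats splitting the same demand and capacity across two competitors.

lam, mu = 8.0, 10.0                      # total arrival and service rates (assumed)

W_monopoly = 1.0 / (mu - lam)            # single pooled M/M/1
W_duopoly = 1.0 / (mu / 2 - lam / 2)     # two half-size M/M/1 queues

print(f"pooled wait: {W_monopoly:.2f}, split wait: {W_duopoly:.2f}")
# 0.50 vs 1.00: halving scale doubles the wait at equal utilization, which
# is why introducing competition can raise waiting times in the model.
```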

15.
Risk Analysis, 2018, 38(4): 777–794
The basic assumptions of the Cox proportional hazards regression model are rarely questioned. This study addresses whether hazard ratio (i.e., relative risk, RR) estimates from the Cox model are biased when these assumptions are violated. We also investigated the dependence of RR estimates on temporal exposure characteristics, and how inadequate control for a strong, time‐dependent confounder affects RRs for a modest, correlated risk factor. In a realistic cohort of 500,000 adults constructed using the National Cancer Institute Smoking History Generator, we used the Cox model with increasing control of smoking to examine the impact on RRs for smoking and a correlated covariate X. The smoking‐associated RR was strongly modified by age. Pack‐years of smoking did not sufficiently control for its effects; simultaneous control for effect modification by age and time‐dependent cumulative exposure, exposure duration, and time since cessation improved model fit. Even then, residual confounding was evident in RR estimates for covariate X, for which spurious RRs ranged from 0.980 to 1.017 per unit increase. Use of the Cox model to control for a time‐dependent strong risk factor yields unreliable RR estimates unless detailed, time‐varying information is incorporated in analyses. Notwithstanding, residual confounding may bias estimated RRs for a modest risk factor.
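The residual‐confounding mechanism can be reproduced in a toy simulation. The sketch below assumes the lifelines package is available and uses a time‐fixed smoking variable in place of the paper's time‐dependent exposure histories; it gives a null covariate x a spurious hazard ratio when smoking is under‐controlled.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter   # assumes the lifelines package

# x has NO true effect but is correlated with smoking; omitting smoking
# makes the estimated hazard ratio for x spurious.

rng = np.random.default_rng(0)
n = 20000
smoke = rng.binomial(1, 0.5, n)
x = smoke + rng.normal(0.0, 1.0, n)                      # correlated null covariate
t = rng.exponential(1.0 / (0.1 * np.exp(1.2 * smoke)))   # hazard driven by smoking only
event = (t < 15.0).astype(int)                           # administrative censoring
df = pd.DataFrame({"T": np.minimum(t, 15.0), "E": event,
                   "x": x, "smoke": smoke})

for cols in (["x"], ["x", "smoke"]):
    cph = CoxPHFitter().fit(df[["T", "E"] + cols],
                            duration_col="T", event_col="E")
    print(cols, cph.hazard_ratios_.round(3).to_dict())
# Without 'smoke', the HR for x is inflated well above 1; with full control
# it returns to ~1.0 -- the analogue of the residual confounding the study
# documents for covariate X.
```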

16.
We analyze a dynamic stochastic general‐equilibrium (DSGE) model with an externality—through climate change—from using fossil energy. Our central result is a simple formula for the marginal externality damage of emissions (or, equivalently, for the optimal carbon tax). This formula, which holds under quite plausible assumptions, reveals that the damage is proportional to current GDP, with the proportion depending only on three factors: (i) discounting, (ii) the expected damage elasticity (how many percent of the output flow is lost from an extra unit of carbon in the atmosphere), and (iii) the structure of carbon depreciation in the atmosphere. Thus, the stochastic values of future output, consumption, and the atmospheric CO2 concentration, as well as the paths of technology (whether endogenous or exogenous) and population, and so on, all disappear from the formula. We find that the optimal tax should be a bit higher than the median, or most well‐known, estimates in the literature. We also formulate a parsimonious yet comprehensive and easily solved model allowing us to compute the optimal and market paths for the use of different sources of energy and the corresponding climate change. We find coal—rather than oil—to be the main threat to economic welfare, largely due to its abundance. We also find that the costs of inaction are particularly sensitive to the assumptions regarding the substitutability of different energy sources and technological progress.
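The headline formula can be evaluated in a few lines. The sketch below uses placeholder parameter values, not the paper's calibration; the carbon‐depreciation structure (a permanent share plus a geometrically decaying share) follows the form the abstract describes, and the tax is proportional to current GDP with the three listed ingredients.

```python
# Marginal externality damage per unit of carbon, proportional to GDP:
#     tax_t = Y_t * gamma * sum_{j>=0} beta**j * theta_j
# where gamma is the expected damage elasticity, beta the discount factor,
# and theta_j the share of an emitted unit still in the atmosphere after j
# years.  All values below are placeholders, not the paper's calibration.

beta = 0.985                         # annual discount factor (assumed)
gamma = 2.4e-5                       # damage elasticity per GtC (assumed)
phi_L, phi_0, phi = 0.2, 0.4, 0.02   # permanent share, excess share, decay (assumed)

theta = lambda j: phi_L + (1.0 - phi_L) * phi_0 * (1.0 - phi) ** j
damage_per_gdp = gamma * sum(beta**j * theta(j) for j in range(2000))

Y = 80_000.0                         # world GDP in billions of dollars (assumed)
# billions of $ per GtC equals $ per ton of carbon
print(f"optimal tax ~ {damage_per_gdp * Y:.0f} $/tC (toy numbers)")
# Future output, consumption, technology and population paths never enter:
# only discounting, the damage elasticity and carbon depreciation matter.
```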

17.
The Protective Action Decision Model (PADM) is a multistage model that is based on findings from research on people's responses to environmental hazards and disasters. The PADM integrates the processing of information derived from social and environmental cues with messages that social sources transmit through communication channels to those at risk. The PADM identifies three critical predecision processes that precede all further processing: reception, attention, and comprehension of warnings (or, for environmental and social cues, exposure, attention, and interpretation). The revised model identifies three core perceptions—threat perceptions, protective action perceptions, and stakeholder perceptions—that form the basis for decisions about how to respond to an imminent or long‐term threat. The outcome of the protective action decision‐making process, together with situational facilitators and impediments, produces a behavioral response. In addition to describing the revised model and the research on which it is based, this article describes three applications (development of risk communication programs, evacuation modeling, and adoption of long‐term hazard adjustments) and identifies some of the research needed to address unresolved issues.

18.
The risks from singular natural hazards such as a hurricane have been extensively investigated in the literature. However, little is understood about how individual and collective responses to repeated hazards change communities and impact their preparation for future events. Individual mitigation actions may drive how a community's resilience evolves under repeated hazards. In this paper, we investigate the effect that learning by homeowners can have on household mitigation decisions and on how this influences a region's vulnerability to natural hazards over time, using hurricanes along the east coast of the United States as our case study. To do this, we build an agent-based model (ABM) to simulate homeowners’ adaptation to repeated hurricanes and how this affects the vulnerability of the regional housing stock. Through a case study, we explore how different initial beliefs about the hurricane hazard and how the memory of recent hurricanes could change a community's vulnerability both under current and potential future hurricane scenarios under climate change. In some future hurricane environments, different initial beliefs can result in large differences in the region's long-term vulnerability to hurricanes. We find that when some homeowners mitigate soon after a hurricane—when their memory of the event is the strongest—it can help to substantially decrease the vulnerability of a community.
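A deliberately tiny agent‐based sketch conveys the mechanism; the decision rules and parameters below are our own toy choices, not the authors' model. Beliefs drift toward observed events, memory of a hit decays, and a household mitigates when belief and memory are jointly strong.

```python
import random

# Toy ABM: each homeowner carries a belief about annual hurricane
# probability and a decaying memory of the last hurricane, and mitigates
# (permanently) when belief times memory crosses a threshold.

random.seed(1)
N, YEARS, P_HURRICANE = 1000, 50, 0.08
THRESHOLD, DECAY, LEARN = 0.08, 0.7, 0.5

beliefs = [random.choice([0.02, 0.10]) for _ in range(N)]  # two initial-belief groups
memory = [0.0] * N
mitigated = [False] * N

for year in range(YEARS):
    hit = random.random() < P_HURRICANE
    for i in range(N):
        # memory spikes after a hit, then fades; beliefs drift toward events
        memory[i] = 1.0 if hit else memory[i] * DECAY
        beliefs[i] += LEARN * ((1.0 if hit else 0.0) - beliefs[i]) * 0.1
        if not mitigated[i] and beliefs[i] * memory[i] > THRESHOLD:
            mitigated[i] = True   # acts while the memory is strongest

print("share mitigated:", sum(mitigated) / N)
print("mean belief:", sum(beliefs) / N)
# Varying the initial belief mix, DECAY, or P_HURRICANE shows how priors
# and fading memories change the community's long-run vulnerability.
```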

19.
Is corruption systematically related to electoral rules? Recent theoretical work suggests a positive answer. But little is known about the data. We try to address this lacuna by relating corruption to different features of the electoral system in a sample of about eighty democracies in the 1990s. We exploit the cross‐country variation in the data, as well as the time variation arising from recent episodes of electoral reform. The evidence is consistent with the theoretical priors. Larger voting districts—and thus lower barriers to entry—are associated with less corruption, whereas larger shares of candidates elected from party lists—and thus less individual accountability—are associated with more corruption. Individual accountability appears to be most strongly tied to personal ballots in plurality‐rule elections, even though open party lists also seem to have some effect. Because different aspects roughly offset each other, a switch from strictly proportional to strictly majoritarian elections only has a small negative effect on corruption. (JEL: E62, H3)

20.
This paper develops theoretical foundations for an error analysis of approximate equilibria in dynamic stochastic general equilibrium models with heterogeneous agents and incomplete financial markets. While there are several algorithms that compute prices and allocations for which agents' first‐order conditions are approximately satisfied (“approximate equilibria”), there are few results on how to interpret the errors in these candidate solutions and how to relate the computed allocations and prices to exact equilibrium allocations and prices. We give a simple example to illustrate that approximate equilibria might be very far from exact equilibria. We then interpret approximate equilibria as equilibria for close‐by economies; that is, for economies with close‐by individual endowments and preferences. We present an error analysis for two models that are commonly used in applications, an overlapping generations (OLG) model with stochastic production and an asset pricing model with infinitely lived agents. We provide sufficient conditions that ensure that approximate equilibria are close to exact equilibria of close‐by economies. Numerical examples illustrate the analysis.
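The object being scrutinized, a candidate solution with small first‐order‐condition errors, can be made concrete in the simplest laboratory, the Brock–Mirman growth model, where the exact policy is known in closed form. A sketch with assumed parameters follows; as the paper stresses, small residuals alone do not certify closeness to an exact equilibrium.

```python
import numpy as np

# Brock-Mirman model (log utility, full depreciation, y = k**alpha): the
# exact policy is c(k) = (1 - alpha*beta) * k**alpha.  The unit-free
# Euler-equation residual is ~0 at the exact policy and nonzero for a
# perturbed candidate -- the kind of "approximate equilibrium" error the
# paper analyzes.

alpha, beta = 0.36, 0.95

def euler_error(policy, k):
    c = policy(k)
    k_next = k**alpha - c
    c_next = policy(k_next)
    # Euler equation: 1/c = beta * (1/c') * alpha * k'**(alpha - 1)
    implied_c = 1.0 / (beta * alpha * k_next**(alpha - 1.0) / c_next)
    return abs(implied_c / c - 1.0)

exact = lambda k: (1.0 - alpha * beta) * k**alpha
rough = lambda k: 0.68 * k**alpha          # a slightly-off candidate policy

for k in np.linspace(0.5, 2.0, 4):
    print(f"k={k:.2f}  exact: {euler_error(exact, k):.2e}  "
          f"candidate: {euler_error(rough, k):.2e}")
```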
