Similar Documents
20 similar documents retrieved.
1.
Many models of exposure-related carcinogenesis, including traditional linearized multistage models and more recent two-stage clonal expansion (TSCE) models, belong to a family of models in which cells progress between successive stages (possibly undergoing proliferation at some stages) at rates that may depend, usually linearly, on biologically effective doses. Biologically effective doses, in turn, may depend nonlinearly on administered doses, due to physiologically based pharmacokinetic (PBPK) nonlinearities. This article provides an exact mathematical analysis of the expected number of cells in the last ("malignant") stage of such a "multistage clonal expansion" (MSCE) model as a function of dose rate and age. The solution displays symmetries such that several distinct sets of parameter values provide identical fits to all epidemiological data and make identical predictions about the effects on risk of changes in exposure levels or timing, yet make significantly different predictions about the effects on risk of changes in the composition of exposure that affect the pharmacodynamic dose-response relation. Several different predictions for the effects of such an intervention (such as reducing carcinogenic constituents of an exposure) that acts on only one or a few stages of the carcinogenic process may be equally consistent with all preintervention epidemiological data. This is an example of nonunique identifiability of model parameters and predictions from data. The new results on nonunique model identifiability presented here show that the effects of an intervention on changing age-specific cancer risks in an MSCE model can be either large or small, but which is the case cannot be predicted from preintervention epidemiological data and knowledge of the biological effects of the intervention alone. Rather, biological data that identify which rate parameters hold for which specific stages are required to obtain unambiguous predictions.
From epidemiological data alone, only a set of equally likely alternative predictions can be made for the effects on risk of such interventions.  相似文献   
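The expected-count dynamics of such stage-progression models can be sketched numerically. The following is a minimal illustration, not the article's exact solution: a two-stage clonal expansion with constant (dose-independent) rates, where normal cells initiate at rate mu1, initiated cells expand clonally at net rate (b - d), and malignant cells arise at rate mu2. All parameter values are hypothetical.

```python
def expected_malignant(mu1, b, d, mu2, N=1e7, t_max=70.0, dt=0.01):
    """Euler integration of the expected-value ODEs of a two-stage
    clonal expansion model with constant rates:
        dI/dt = mu1*N + (b - d)*I   (initiated cells)
        dM/dt = mu2*I               (expected malignant cells)
    Returns E[M](t_max). This deliberately simplifies the dose- and
    age-dependent MSCE model discussed above."""
    I = 0.0   # expected initiated cells
    M = 0.0   # expected malignant cells
    for _ in range(int(t_max / dt)):
        dI = (mu1 * N + (b - d) * I) * dt
        dM = mu2 * I * dt
        I += dI
        M += dM
    return M

# Because the ODE system is linear in the mu1*N source term,
# doubling mu1 should double the expected malignant count.
m1 = expected_malignant(mu1=1e-7, b=0.10, d=0.09, mu2=1e-6)
m2 = expected_malignant(mu1=2e-7, b=0.10, d=0.09, mu2=1e-6)
```

The linearity exercised here is exactly what makes distinct parameter sets observationally equivalent: only certain rate combinations are pinned down by the hazard, not the individual stage assignments.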

2.
If a specific biological mechanism could be determined by which a carcinogen increases lung cancer risk, how might this knowledge be used to improve risk assessment? To explore this issue, we assume (perhaps incorrectly) that arsenic in cigarette smoke increases lung cancer risk by hypermethylating the promoter region of gene p16INK4a, leading to a more rapid entry of altered (initiated) cells into a clonal expansion phase. The potential impact on lung cancer of removing arsenic is then quantified using a three‐stage version of a multistage clonal expansion (MSCE) model. This refines the usual two‐stage clonal expansion (TSCE) model of carcinogenesis by resolving its intermediate or “initiated” cell compartment into two subcompartments, representing experimentally observed “patch” and “field” cells. This refinement allows p16 methylation effects to be represented as speeding transitions of cells from the patch state to the clonally expanding field state. Given these assumptions, removing arsenic might greatly reduce the number of nonsmall cell lung cancer cells (NSCLCs) produced in smokers, by up to two‐thirds, depending on the fraction (between 0 and 1) of the smoking‐induced increase in the patch‐to‐field transition rate prevented if arsenic were removed. At present, this fraction is unknown (and could be as low as zero), but the possibility that it could be high (close to 1) cannot be ruled out without further data.  相似文献   

3.
We discuss the hazard function of the two-mutation clonal expansion model with time-dependent parameters, with particular emphasis on identifiability of the parameters. We explicitly construct identifiable parameter combinations, and illustrate the properties of the hazard function under perturbations of the underlying biological parameters.  相似文献   

4.
This paper develops a simple approximation method for computing equilibrium portfolios in dynamic general equilibrium open-economy macro-models. The method is widely applicable, simple to implement, and gives analytical solutions for equilibrium portfolio positions in any combination or type of assets. It can be used in models with any number of assets, whether markets are complete or incomplete, and can be applied to stochastic dynamic general equilibrium models of any dimension, so long as the model is amenable to solution using standard approximation methods. We first illustrate the approach using a simple two-asset endowment economy model and then show how the results extend to the case of any number of assets and a general economic structure.  相似文献

5.
This paper introduces a general method to convert a model defined by moment conditions that involve both observed and unobserved variables into equivalent moment conditions that involve only observable variables. This task can be accomplished without introducing infinite‐dimensional nuisance parameters using a least favorable entropy‐maximizing distribution. We demonstrate, through examples and simulations, that this approach covers a wide class of latent variables models, including some game‐theoretic models and models with limited dependent variables, interval‐valued data, errors‐in‐variables, or combinations thereof. Both point‐ and set‐identified models are transparently covered. In the latter case, the method also complements the recent literature on generic set‐inference methods by providing the moment conditions needed to construct a generalized method of moments‐type objective function for a wide class of models. Extensions of the method that cover conditional moments, independence restrictions, and some state‐space models are also given.  相似文献   

6.
We introduce a family of generalized‐method‐of‐moments estimators of the parameters of a continuous‐time Markov process observed at random time intervals. The results include strong consistency, asymptotic normality, and a characterization of standard errors. Sampling is at an arrival intensity that is allowed to depend on the underlying Markov process and on the parameter vector to be estimated. We focus on financial applications, including tick‐based sampling, allowing for jump diffusions, regime‐switching diffusions, and reflected diffusions.  相似文献   

7.
The present study investigates U.S. Department of Agriculture inspection records in the Agricultural Quarantine Activity System database to estimate the probability of quarantine pests on propagative plant materials imported from various countries of origin and to develop a methodology for ranking the risk of country–commodity combinations based on quarantine pest interceptions. Data collected from October 2014 to January 2016 were used for developing the predictive models and for a validation study. A generalized linear model with Bayesian inference and a generalized linear mixed effects model were used to compare the interception rates of quarantine pests across country–commodity combinations. The prediction ability of the generalized linear mixed effects model was greater than that of the generalized linear model. The estimated pest interception probability and confidence interval for each country–commodity combination were categorized into one of four compliance levels, “High,” “Medium,” “Low,” and “Poor/Unacceptable,” using K-means clustering analysis. This study presents a risk-based categorization for each country–commodity combination based on the probability of quarantine pest interceptions and the uncertainty in that assessment.  相似文献
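The estimate-then-categorize pipeline can be sketched in miniature. Below, a Beta-binomial posterior mean stands in for the article's Bayesian GLM/GLMM fit, and a plain one-dimensional K-means bins the estimated interception rates into four levels. The counts, the Jeffreys prior, and the cluster-to-level mapping are all illustrative assumptions, not the study's data or fitted model.

```python
import random

def beta_posterior_mean(intercepted, inspected, a=0.5, b=0.5):
    """Posterior mean of the interception probability under a
    Beta(a, b) prior with a binomial likelihood. The Jeffreys prior
    (a = b = 0.5) is an arbitrary illustrative choice."""
    return (intercepted + a) / (inspected + a + b)

def kmeans_1d(values, k=4, iters=50, seed=0):
    """Plain Lloyd's algorithm in one dimension. Returns a cluster
    label per value; label 0 is the lowest-rate cluster because the
    centers are kept sorted ascending."""
    rng = random.Random(seed)
    centers = sorted(rng.sample(values, k))
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            groups[min(range(k), key=lambda c: abs(v - centers[c]))].append(v)
        centers = [sum(g) / len(g) if g else centers[j]
                   for j, g in enumerate(groups)]
        centers.sort()
    return [min(range(k), key=lambda c: abs(v - centers[c])) for v in values]

# Hypothetical (intercepted, inspected) counts for ten
# country-commodity combinations.
counts = [(0, 500), (1, 400), (3, 300), (8, 250), (20, 200),
          (2, 600), (15, 150), (40, 120), (0, 900), (5, 350)]
rates = [beta_posterior_mean(x, n) for x, n in counts]
labels = kmeans_1d(rates, k=4)
levels = ["High", "Medium", "Low", "Poor/Unacceptable"]  # compliance names
```

A real implementation would cluster on the full posterior interval, not just the point estimate, since the article categorizes both the probability and its uncertainty.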

8.
Prediction error identification methods have recently been the object of much study and have wide applicability. The maximum likelihood (ML) identification method for Gaussian models and the least squares prediction error (LSPE) method are special cases of the general approach. In this paper, we investigate conditions for distinguishability, or identifiability, of multivariate random processes, for both continuous and discrete observation time T. We consider stationary stochastic processes, for the ML and LSPE methods, and for a large observation interval T we resolve the identifiability question. Our analysis begins with stationary autoregressive moving average models, but the conclusions apply to general stationary, stable vector models. The limiting value of the criterion function as T → ∞ is evaluated and viewed as a distance measure in the parameter space of the model. The main new result of this paper is to specify the equivalence classes of stationary models that achieve the global minimum of this distance measure, and hence to determine precisely the classes of models that are not identifiable from each other. These conclusions are useful for parameterizing multivariate stationary models in system identification problems. Relationships to previously discovered identifiability conditions are discussed.  相似文献

9.
10.
The benchmark dose (BMD) approach has gained acceptance as a valuable risk assessment tool, but risk assessors still face significant challenges in selecting an appropriate BMD/BMDL estimate from the results of a set of acceptable dose-response models. Current approaches do not explicitly address model uncertainty, and there is an existing need to more fully inform health risk assessors in this regard. In this study, a Bayesian model averaging (BMA) BMD estimation method that takes model uncertainty into account is proposed as an alternative to current BMD estimation approaches for continuous data. Using the “hybrid” method proposed by Crump, two BMA strategies, one maximum-likelihood-estimation based and one Markov chain Monte Carlo based, are first applied as a demonstration to calculate model-averaged BMD estimates from real continuous dose-response data. The outcomes from the example data sets examined suggest that the BMA BMD estimates are more reliable than the estimates from the individual model with the highest posterior weight, in terms of higher BMDLs and narrower 90th percentile intervals. In addition, a simulation study is performed to evaluate the accuracy of the BMA BMD estimator. The results from the simulation study suggest that the BMA BMD estimates have smaller bias than BMDs selected using other criteria. To further validate the BMA method, some technical issues, including the selection of models and the use of bootstrap methods for BMDL derivation, need further investigation over a more extensive, representative set of dose-response data.  相似文献
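The core averaging step can be sketched with information-criterion weights, a common stand-in for the posterior model probabilities used in MLE-based BMA. The per-model BMDs and AIC values below are hypothetical, and a real analysis would average on the dose-response curve (or bootstrap for the BMDL) rather than simply averaging point estimates.

```python
import math

def akaike_weights(aics):
    """Normalized weights exp(-0.5 * dAIC) / sum, a standard proxy
    for posterior model probabilities in model averaging."""
    best = min(aics)
    raw = [math.exp(-0.5 * (a - best)) for a in aics]
    total = sum(raw)
    return [r / total for r in raw]

def model_averaged_bmd(bmds, aics):
    """Weighted average of per-model BMD estimates."""
    return sum(w * b for w, b in zip(akaike_weights(aics), bmds))

# Hypothetical fits of four dose-response models to one data set.
bmds = [12.1, 9.8, 15.3, 11.0]     # per-model BMD estimates (mg/kg-day)
aics = [103.2, 101.5, 108.9, 102.0]
weights = akaike_weights(aics)
bmd_avg = model_averaged_bmd(bmds, aics)
```

The averaged estimate necessarily falls between the extreme per-model BMDs, which is one source of the stability the abstract reports relative to picking the single best-weighted model.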

11.
In this article, we study the performance of multi-echelon inventory systems with intermediate, external product demand at one or more upper echelons. This type of problem is of general interest in inventory theory and of particular importance in supply chain systems with both end-product demand and spare parts (subassembly) demand. The multi-echelon inventory system considered here is a combination of assembly and serial stages with direct demand at more than one node. The presence of multiple sources of demand leads to interesting inventory allocation problems. The demand and capacity at each node are considered stochastic in nature. A fixed supply and manufacturing lead time is used between the stages. We develop mathematical models for these multi-echelon systems that describe the inventory dynamics and allow simulation of the system. A simulation-based inventory optimization approach is developed to search for the best base-stock levels for these systems. The gradient estimation technique of perturbation analysis is used to derive sample-path estimators. We consider four allocation schemes: lexicographic with priority to intermediate demand, lexicographic with priority to downstream demand, predetermined proportional allocation, and proportional allocation. Based on the numerical results, we find that no single allocation policy is appropriate under all conditions. Depending on the combinations of variability and utilization, we identify conditions under which use of certain allocation policies across the supply chain results in lower costs. Further, we determine how selection of an inappropriate allocation policy in the presence of scarce on-hand inventory can result in downstream nodes facing acute shortages. Consequently, we provide insight into why good allocation policies work well under differing sets of operating conditions.  相似文献
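The four allocation schemes can be illustrated for a single node with scarce on-hand stock facing one intermediate (external spare-parts) demand and one downstream demand. This is a toy single-period sketch under assumed demands, not the paper's multi-echelon simulation-optimization model; the policy names mirror the four schemes listed above.

```python
def allocate(stock, d_int, d_down, policy, frac_int=0.5):
    """Split scarce on-hand stock between intermediate demand (d_int)
    and downstream demand (d_down) under one of four simple rules."""
    if policy == "lex_intermediate":      # intermediate demand first
        a_int = min(stock, d_int)
        a_down = min(stock - a_int, d_down)
    elif policy == "lex_downstream":      # downstream demand first
        a_down = min(stock, d_down)
        a_int = min(stock - a_down, d_int)
    elif policy == "fixed_proportion":    # predetermined split frac_int
        a_int = min(d_int, stock * frac_int)
        a_down = min(d_down, stock - a_int)
    elif policy == "proportional":        # pro-rata in realized demand
        a_int = min(d_int, stock * d_int / (d_int + d_down))
        a_down = min(d_down, stock - a_int)
    else:
        raise ValueError(policy)
    return a_int, a_down

# With 10 units on hand against demands of 8 and 6, the rules diverge:
results = {p: allocate(10, 8, 6, p)
           for p in ["lex_intermediate", "lex_downstream",
                     "fixed_proportion", "proportional"]}
```

Even this one-shot example shows the mechanism behind the paper's finding: under lexicographic priority to intermediate demand, the downstream node absorbs almost the entire shortage.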

12.
The ability to accurately forecast and control inpatient census, and thereby workloads, is a critical and long‐standing problem in hospital management. The majority of current literature focuses on optimal scheduling of inpatients, but largely ignores the process of accurate estimation of the trajectory of patients throughout the treatment and recovery process. The result is that current scheduling models are optimizing based on inaccurate input data. We developed a Clustering and Scheduling Integrated (CSI) approach to capture patient flows through a network of hospital services. CSI functions by clustering patients into groups based on similarity of trajectory using a novel semi‐Markov model (SMM)‐based clustering scheme, as opposed to clustering by patient attributes as in previous literature. Our methodology is validated by simulation and then applied to real patient data from a partner hospital where we demonstrate that it outperforms a suite of well‐established clustering methods. Furthermore, we demonstrate that extant optimization methods achieve significantly better results on key hospital performance measures under CSI, compared with traditional estimation approaches, increasing elective admissions by 97% and utilization by 22% compared to 30% and 8% using traditional estimation techniques. From a theoretical standpoint, the SMM‐clustering is a novel approach applicable to any temporal‐spatial stochastic data that is prevalent in many industries and application areas.  相似文献   

13.
This paper develops characterizations of identified sets of structures and structural features for complete and incomplete models involving continuous or discrete variables. Multiple values of unobserved variables can be associated with particular combinations of observed variables. This can arise when there are multiple sources of heterogeneity, censored or discrete endogenous variables, or inequality restrictions on functions of observed and unobserved variables. The models generalize the class of incomplete instrumental variable (IV) models in which unobserved variables are single‐valued functions of observed variables. Thus the models are referred to as generalized IV (GIV) models, but there are important cases in which instrumental variable restrictions play no significant role. Building on a definition of observational equivalence for incomplete models the development uses results from random set theory that guarantee that the characterizations deliver sharp bounds, thereby dispensing with the need for case‐by‐case proofs of sharpness. The use of random sets defined on the space of unobserved variables allows identification analysis under mean and quantile independence restrictions on the distributions of unobserved variables conditional on exogenous variables as well as under a full independence restriction. The results are used to develop sharp bounds on the distribution of valuations in an incomplete model of English auctions, improving on the pointwise bounds available until now. Application of many of the results of the paper requires no familiarity with random set theory.  相似文献   

14.
In this study, we develop an analytical framework for personalizing the anticoagulation therapy of patients who are taking warfarin. Consistent with medical practice, our treatment design consists of two stages: (i) the initiation stage, modeled as a partially observable Markov decision process (POMDP), during which the physician learns through systematic belief updates about the unobservable patient sensitivity to warfarin, and (ii) the maintenance stage, modeled as a Markov decision process (MDP), during which the physician relies on the formed belief about patient sensitivity to determine the stable, patient-specific warfarin dose to prescribe. We develop an expression for belief updates in the POMDP, establish the optimality of the myopic policy for the MDP, and derive conditions for the existence and uniqueness of a myopically optimal dose. We validate our models using a real-life patient data set gathered at the Hematology Clinic of the Jewish General Hospital in Montreal. The proposed analytical framework and case study yield useful clinical insights, for example, concerning the length of the initiation period and the importance of correctly assessing patient sensitivity.  相似文献
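The belief-update mechanism in the initiation stage can be sketched with Bayes' rule over a discrete set of sensitivity types. This is an illustrative stand-in for the paper's derived expression: the three types and the per-period likelihoods of the observed anticoagulation response are assumed numbers, not the study's model.

```python
def update_belief(belief, likelihoods):
    """One Bayesian belief update over discrete patient-sensitivity
    types: posterior proportional to prior * P(observation | type)."""
    post = [b * l for b, l in zip(belief, likelihoods)]
    z = sum(post)
    return [p / z for p in post]

# Three hypothetical sensitivity types: low, normal, high.
belief = [1 / 3, 1 / 3, 1 / 3]   # uninformative prior at initiation

# Assumed likelihoods of each period's observed response under each
# type, for three successive dosing periods (illustrative only).
observations = [
    [0.2, 0.5, 0.3],
    [0.1, 0.6, 0.3],
    [0.2, 0.7, 0.1],
]
for lik in observations:
    belief = update_belief(belief, lik)
```

As observations accumulate, the belief concentrates on one type; the maintenance-stage dose decision would then be made against this concentrated belief.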

15.
A challenge with multiple-chemical risk assessment is the need to consider the joint behavior of chemicals in mixtures. To address this need, pharmacologists and toxicologists have developed methods over the years to evaluate and test chemical interaction. In practice, however, testing of chemical interaction more often comprises ad hoc binary combinations and rarely examines higher-order combinations. One explanation for this practice is the belief that there are simply too many possible combinations of chemicals to consider. Indeed, under stochastic conditions the possible number of chemical combinations scales geometrically as the pool of chemicals increases. However, the occurrence of chemicals in the environment is determined by factors, economic in part, that favor some chemicals over others. We investigate methods from the field of biogeography, originally developed to study avian species co-occurrence patterns, and adapt these approaches to examine chemical co-occurrence. These methods were applied to a national survey of pesticide residues in 168 child care centers from across the country. Our findings show that pesticide co-occurrence in the child care centers was not random but highly structured, leading to the co-occurrence of specific pesticide combinations. Thus, ecological studies of species co-occurrence parallel the issue of chemical co-occurrence at specific locations. Both are driven by processes that introduce structure in the pattern of co-occurrence. We conclude that the biogeographical tools used to detect this structure in ecological studies are relevant to evaluations of pesticide mixtures for exposure and risk assessment.  相似文献
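One standard biogeographical co-occurrence statistic of the kind referred to above is the checkerboard C-score. The sketch below computes it on hypothetical site-by-pesticide presence/absence matrices; in practice the observed score is compared against a null distribution from randomized matrices, a step omitted here for brevity.

```python
from itertools import combinations

def c_score(matrix):
    """Mean checkerboard units over all pairs of species (columns):
    CU(i, j) = (R_i - S_ij) * (R_j - S_ij), where R_i is species i's
    site count and S_ij the number of sites shared by i and j.
    Higher values indicate more segregated (less co-occurring) pairs."""
    n_species = len(matrix[0])
    cols = [[row[j] for row in matrix] for j in range(n_species)]
    units = []
    for i, j in combinations(range(n_species), 2):
        r_i, r_j = sum(cols[i]), sum(cols[j])
        s = sum(a and b for a, b in zip(cols[i], cols[j]))
        units.append((r_i - s) * (r_j - s))
    return sum(units) / len(units)

# Hypothetical presence/absence of 3 pesticides at 4 child care centers.
perfectly_nested = [[1, 1, 1],
                    [1, 1, 1],
                    [1, 1, 0],
                    [1, 0, 0]]   # every pair co-occurs wherever possible
checkerboard = [[1, 0, 1],
                [0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]]       # pesticides 1 and 2 never share a site
```

A structured (nonrandom) co-occurrence pattern shows up as an observed C-score that sits in the tail of the null distribution.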

16.
Critical infrastructure systems must be both robust and resilient in order to ensure the functioning of society. To improve the performance of such systems, we often use risk and vulnerability analysis to find and address system weaknesses. A critical component of such analyses is the ability to accurately determine the negative consequences of various types of failures in the system. Numerous mathematical and simulation models exist that can be used to this end. However, there are relatively few studies comparing the implications of using different modeling approaches in the context of comprehensive risk analysis of critical infrastructures. In this article, we suggest a classification of these models, which span from simple topologically‐oriented models to advanced physical‐flow‐based models. Here, we focus on electric power systems and present a study aimed at understanding the tradeoffs between simplicity and fidelity in models used in the context of risk analysis. Specifically, the purpose of this article is to compare performance estimates achieved with a spectrum of approaches typically used for risk and vulnerability analysis of electric power systems and evaluate if more simplified topological measures can be combined using statistical methods to be used as a surrogate for physical flow models. The results of our work provide guidance as to appropriate models or combinations of models to use when analyzing large‐scale critical infrastructure systems, where simulation times quickly become insurmountable when using more advanced models, severely limiting the extent of analyses that can be performed.  相似文献   

17.
The use of benchmark dose (BMD) calculations for dichotomous or continuous responses is well established in the risk assessment of cancer and noncancer endpoints. In some cases, responses to exposure are categorized in terms of ordinal severity effects such as none, mild, adverse, and severe. Such responses can be assessed using categorical regression (CATREG) analysis. However, while CATREG has been employed to compare the benchmark approach and the no-adverse-effect-level (NOAEL) approach in determining a reference dose, the utility of CATREG for risk assessment remains unclear. This study proposes a CATREG model to extend the BMD approach to ordered categorical responses by modeling severity levels as censored interval limits of a standard normal distribution. The BMD is calculated as a weighted average of the BMDs obtained at dichotomous cutoffs for each adverse severity level above the critical effect, with the weights proportional to the reciprocal of the expected loss at the cutoff under the normal probability model. This approach provides a link between the current BMD procedures for dichotomous and continuous data. We estimate the CATREG parameters using a Markov chain Monte Carlo simulation procedure. The proposed method is demonstrated using examples of aldicarb and urethane, each with several categories of severity levels. Simulation studies show that the BMD and BMDL (lower confidence bound on the BMD) estimates obtained with the proposed method are quite compatible with the corresponding estimates obtained with the existing methods for dichotomous and continuous data; the difference depends mainly on the choice of cutoffs for the severity levels.  相似文献

18.
This paper extends the non-Gaussian OU stochastic volatility model using CGMY and GIG processes, building a non-Gaussian OU stochastic volatility model driven by a continuous superposition of Lévy processes, and gives a shot-noise representation and approximation of the model. On this basis, to capture the dependence structure of volatility, the retrospective sampling method is extended to non-Gaussian OU stochastic volatility models driven by a continuous superposition of Lévy processes, and a Bayesian parameter inference method is designed for Lévy-driven non-Gaussian OU stochastic volatility models. Finally, the different models and estimation methods are validated and compared using real financial market data. Both the theoretical and empirical results show that extending the non-Gaussian OU stochastic volatility model with CGMY and GIG processes clearly improves model performance and better captures the volatility dynamics of financial asset returns; the proposed Bayesian inference method for Lévy-driven non-Gaussian OU stochastic volatility models is also efficient, overcoming shortcomings of earlier studies. In addition, the empirical study finds jump features in the returns and volatility of the Shanghai Composite Index, as well as pronounced long memory in the volatility series.  相似文献

19.
This paper formulates and estimates multistage production functions for children's cognitive and noncognitive skills. Skills are determined by parental environments and investments at different stages of childhood. We estimate the elasticity of substitution between investments in one period and stocks of skills in that period to assess the benefits of early investment in children compared to later remediation. We establish nonparametric identification of a general class of production technologies based on nonlinear factor models with endogenous inputs. A by‐product of our approach is a framework for evaluating childhood and schooling interventions that does not rely on arbitrarily scaled test scores as outputs and recognizes the differential effects of the same bundle of skills in different tasks. Using the estimated technology, we determine optimal targeting of interventions to children with different parental and personal birth endowments. Substitutability decreases in later stages of the life cycle in the production of cognitive skills. It is roughly constant across stages of the life cycle in the production of noncognitive skills. This finding has important implications for the design of policies that target the disadvantaged. For most configurations of disadvantage it is optimal to invest relatively more in the early stages of childhood than in later stages.  相似文献   

20.
A unifying framework to test for causal effects in nonlinear models is proposed. We consider a generalized linear‐index regression model with endogenous regressors and no parametric assumptions on the error disturbances. To test the significance of the effect of an endogenous regressor, we propose a statistic that is a kernel‐weighted version of the rank correlation statistic (tau) of Kendall (1938). The semiparametric model encompasses previous cases considered in the literature (continuous endogenous regressors (Blundell and Powell (2003)) and a single binary endogenous regressor (Vytlacil and Yildiz (2007))), but the testing approach is the first to allow for (i) multiple discrete endogenous regressors, (ii) endogenous regressors that are neither discrete nor continuous (e.g., a censored variable), and (iii) an arbitrary “mix” of endogenous regressors (e.g., one binary regressor and one continuous regressor).  相似文献   
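The building block of the proposed statistic is Kendall's rank correlation. The sketch below computes the plain (unweighted) tau over all pairs; the paper's test instead weights each pair by a kernel in the covariates, a step omitted here, so this is only the pairwise-concordance core, not the test statistic itself.

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) / total pairs,
    with tied pairs counted as neither. The kernel-weighted statistic
    described above would replace the implicit weight 1 on each pair
    with a kernel weight; here every pair counts equally."""
    pairs = list(combinations(range(len(x)), 2))
    conc = disc = 0
    for i, j in pairs:
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            conc += 1
        elif s < 0:
            disc += 1
    return (conc - disc) / len(pairs)

x = [1, 2, 3, 4, 5]
monotone = kendall_tau(x, [2, 4, 6, 8, 10])    # perfectly concordant
reverse = kendall_tau(x, [10, 8, 6, 4, 2])     # perfectly discordant
```

Because tau depends only on pairwise orderings, it accommodates the arbitrary mix of discrete, continuous, and censored endogenous regressors the abstract emphasizes.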

