Similar Articles
20 similar articles retrieved
1.
While large models based on a deterministic-reductionist philosophy have an important part to play in environmental research, it is advantageous to consider alternative modelling methodologies which overtly acknowledge the poorly defined and uncertain nature of most environmental systems. The paper discusses this topic and presents an integrated statistical modelling procedure which involves three main methodological tools: uncertainty and sensitivity studies based on Monte Carlo simulation techniques; dominant mode analysis using a new method of combined linearization and model-order reduction; and data-based mechanistic modelling. This novel approach is illustrated by two practical examples: modelling the global carbon cycle in relation to possible climate change; and modelling a horticultural glasshouse for the purposes of automatic climate control system design.
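A minimal sketch of the Monte Carlo uncertainty and sensitivity analysis the abstract refers to, applied to a toy first-order reservoir model; the model form, parameter ranges and names are illustrative assumptions, not the paper's carbon-cycle or glasshouse models.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)


def reservoir(k, u, t=50.0, x0=600.0):
    """Toy first-order reservoir: dx/dt = u - k*x, evaluated at time t.

    k (turnover rate) and u (input flux) are uncertain parameters;
    the model and numbers are purely illustrative.
    """
    x_eq = u / k
    return x_eq + (x0 - x_eq) * np.exp(-k * t)


# Monte Carlo: draw parameters from assumed uncertainty ranges.
n = 10_000
k = rng.uniform(0.005, 0.02, n)
u = rng.uniform(5.0, 7.0, n)
out = reservoir(k, u)

# Uncertainty study: spread of the model output.
print(f"mean = {out.mean():.1f}, 90% interval = "
      f"[{np.percentile(out, 5):.1f}, {np.percentile(out, 95):.1f}]")

# Crude sensitivity study: rank correlation of output with each input.
print("sensitivity to k:", round(spearmanr(k, out)[0], 2))
print("sensitivity to u:", round(spearmanr(u, out)[0], 2))
```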

2.
Evolutionary ecology is the study of evolutionary processes and the ecological conditions that influence them. A fundamental paradigm underlying the study of evolution is natural selection. Although there are a variety of operational definitions for natural selection in the literature, perhaps the most general characterizes selection as the process whereby heritable variation in fitness, associated with variation in one or more phenotypic traits, leads to intergenerational change in the frequency distribution of those traits. The past 20 years have witnessed a marked increase in the precision and reliability with which we can estimate components of fitness and characterize natural selection in wild populations, owing particularly to significant advances in methods for analysing data from marked individuals. In this paper, we focus on several issues that we believe are important considerations for the application and development of these methods in the context of addressing questions in evolutionary ecology. First, our traditional approach to estimation often rests on analysis of aggregates of individuals, which in the wild may reflect increasingly non-random (selected) samples with respect to the trait(s) of interest. In some cases, analysis at the aggregate level, rather than the individual level, may obscure important patterns. While a growing number of analytical tools can estimate parameters at the individual level and cope (to varying degrees) with progressive selection of the sample, the advent of new methods does not reduce the need to consider carefully the appropriate level of analysis in the first place. Estimation should be motivated a priori by strong theoretical analysis, which provides clear guidance both (i) in identifying realistic and meaningful models to include in the candidate model set and (ii) in supplying the appropriate context in which to interpret the results. Second, while selection (as defined) operates at the level of the individual, the selection gradient is often (if not generally) conditional on the abundance of the population. As such, it may be important to estimate transition rates conditional on both the parameter values of the other individuals in the population (or at least their distribution) and population abundance. This will undoubtedly pose a considerable challenge for both single- and multi-strata applications, and it will require renewed consideration of the estimation of abundance, especially for open populations. Third, selection typically operates on dynamic, individually varying traits. Estimation in this setting may require characterizing fitness in terms of individual plasticity in one or more state variables, constituting an analysis of individuals' norms of reaction to variable environments. This can be quite complex, especially for traits under facultative control. Recent work indicates that the pattern of selection on such traits is conditional on the relative rates of movement among, and the frequencies of, spatially heterogeneous habitats, suggesting that analyses of the evolution of life histories in open populations can be misleading in some cases.
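As one concrete illustration of characterizing selection from individual data, the following sketch estimates a directional selection gradient by regressing relative fitness on a standardized trait (the Lande-Arnold approach, standard in this literature but not claimed by the abstract as its own method); the data and parameter values are simulated assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data: one phenotypic trait and individual fitness
# (e.g. number of offspring). All values are illustrative.
n = 500
trait = rng.normal(0.0, 1.0, n)
fitness = rng.poisson(np.exp(0.1 + 0.3 * trait))

# Lande-Arnold style estimate: regress *relative* fitness on the
# standardized trait; the slope is the directional selection gradient.
w = fitness / fitness.mean()
z = (trait - trait.mean()) / trait.std()
beta = np.polyfit(z, w, 1)[0]
print(f"directional selection gradient ≈ {beta:.3f}")
```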

3.

4.
5.
I review the use of auxiliary variables in capture-recapture models for the estimation of demographic parameters (e.g. capture probability, population size, survival probability, and recruitment, emigration and immigration numbers), focusing on what has been done in current research and what still needs to be done. Typically in the literature, covariate modelling has made capture and survival probabilities functions of covariates, but there are good reasons to make other parameters functions of covariates as well. The types of covariates considered include environmental covariates, which may vary by occasion but are constant over animals, and individual animal covariates, which are usually assumed constant over time. I also discuss the difficulties of using time-dependent individual animal covariates and some possible solutions. Covariates are usually assumed to be measured without error, which may not be realistic. For closed populations, one approach to modelling heterogeneity in capture probabilities uses observable individual covariates and is thus related to the primary purpose of this paper. The now-standard Huggins-Alho approach conditions on the captured animals and then uses a generalized Horvitz-Thompson estimator to estimate population size. This approach has the advantage of simplicity, in that one does not have to specify a distribution for the covariates, and the disadvantage that it does not use the full likelihood to estimate population size. Alternatively, one could specify a distribution for the covariates and implement a full-likelihood approach to inference, estimating the capture function, the covariate probability distribution and the population size. The general Jolly-Seber open model enables one to estimate capture probability, population sizes, survival rates and birth numbers. Much of the focus on modelling covariates in program MARK has been on survival and capture probability in the Cormack-Jolly-Seber model and its generalizations (including tag-return models); these models condition on the number of animals marked and released. A related, but distinct, topic is radio-telemetry survival modelling, which typically uses a modified Kaplan-Meier method and a Cox proportional hazards model for auxiliary variables. Recently there has been an emphasis on integrating recruitment into the likelihood, and research on how to implement covariate modelling for recruitment, and perhaps population size, is needed. The combined open and closed 'robust' design model can also benefit from covariate modelling, and some important options have already been implemented in MARK. Many models are usually fitted to one data set, which has necessitated the development of model selection criteria based on the AIC (Akaike information criterion) and the alternative of averaging over reasonable models. The special problems of estimating over-dispersion when covariates are included in the model, and then adjusting for over-dispersion in model selection, could benefit from further research.
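A minimal sketch of the Huggins-Alho conditional-likelihood and generalized Horvitz-Thompson idea described above, under simplifying assumptions: a closed population, a logistic per-occasion capture function of one covariate, and simulated data. All names and parameter values are illustrative, not from the paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(7)

# Simulate a closed population where capture probability depends on
# an individual covariate (e.g. body mass). Values are illustrative.
N, T = 1000, 5                       # true size, capture occasions
x = rng.normal(0.0, 1.0, N)
p = expit(-1.0 + 0.8 * x)            # per-occasion capture probability
caught = rng.random((N, T)) < p[:, None]
seen = caught.any(axis=1)            # only captured animals are observed
x_obs, y_obs = x[seen], caught[seen].sum(axis=1)


def neg_cond_loglik(theta):
    """Huggins-Alho conditional likelihood: binomial capture counts for
    the captured animals, divided by P(captured at least once)."""
    a, b = theta
    pi = expit(a + b * x_obs)
    p_any = 1.0 - (1.0 - pi) ** T
    ll = y_obs * np.log(pi) + (T - y_obs) * np.log1p(-pi) - np.log(p_any)
    return -ll.sum()


a_hat, b_hat = minimize(neg_cond_loglik, x0=[0.0, 0.0]).x

# Generalized Horvitz-Thompson estimate of population size.
p_any_hat = 1.0 - (1.0 - expit(a_hat + b_hat * x_obs)) ** T
N_hat = (1.0 / p_any_hat).sum()
print(f"estimated N ≈ {N_hat:.0f} (true N = {N})")
```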

6.

7.
Bayesian graphical modelling: a case-study in monitoring health outcomes
Bayesian graphical modelling represents the synthesis of several recent developments in applied complex modelling. After describing a moderately challenging real example, we show how graphical models and Markov chain Monte Carlo methods naturally provide a direct path between model specification and the computational means of making inferences on that model. These ideas are illustrated with a range of modelling issues related to our example. An appendix discusses the BUGS software.
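A minimal sketch of the machinery the abstract describes: a small conjugate hierarchical model whose graph structure yields simple full conditionals, sampled by Gibbs (a Markov chain Monte Carlo method). The model and numbers are illustrative, in the spirit of the classic BUGS examples, not the paper's health-outcomes case-study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graphical model (not the paper's): counts y[i] ~ Poisson(lam[i]*t[i]),
# lam[i] ~ Gamma(alpha, beta), beta ~ Gamma(g, d). Conjugacy gives
# closed-form full conditionals, so Gibbs sampling is straightforward.
y = np.array([5, 1, 5, 14, 3, 19])                   # illustrative counts
t = np.array([94.3, 15.7, 62.9, 126.0, 5.24, 31.4])  # illustrative exposures
alpha, g, d = 1.8, 0.01, 1.0                         # assumed hyperparameters

n_iter, n = 5000, len(y)
beta = 1.0
draws = np.empty((n_iter, n))

for it in range(n_iter):
    # Full conditionals read straight off the graph:
    lam = rng.gamma(alpha + y, 1.0 / (beta + t))            # lam[i] | rest
    beta = rng.gamma(g + n * alpha, 1.0 / (d + lam.sum()))  # beta | rest
    draws[it] = lam

posterior = draws[1000:]            # discard burn-in
print("posterior mean rates:", posterior.mean(axis=0).round(3))
```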

8.
9.
It is common to fit generalized linear models with binomial or Poisson responses where the data show greater variability than the theoretical variability assumed by the model. This phenomenon, known as overdispersion, may spoil inferences by declaring significant parameters associated with variables that in fact have no significant effect on the dependent variable. This paper explains some methods to detect overdispersion and presents and evaluates three well-known methodologies that have proved useful in correcting the problem: random-mean models, quasi-likelihood methods and the double exponential family. In addition, it proposes some new Bayesian model extensions that have proved useful in correcting overdispersion. Finally, using the information provided by the National Demographic and Health Survey 2005, the departmental factors that influence mortality of children under 5 years and female postnatal screening are determined. Based on the results, extensions that generalize some of the aforementioned models are also proposed, and their use is motivated by the data set under study. The results show that the proposed overdispersion models provide a better statistical fit to the data.
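A minimal sketch of detecting overdispersion and applying a quasi-likelihood correction, one of the methodologies evaluated above, using statsmodels; the data, covariate and parameter values are simulated assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Simulate overdispersed counts: a Poisson GLM with an unobserved
# multiplicative random effect on the mean (values are illustrative).
n = 400
x = rng.normal(size=n)
mu = np.exp(0.5 + 0.4 * x) * rng.gamma(shape=2.0, scale=0.5, size=n)
y = rng.poisson(mu)

X = sm.add_constant(x)
res = sm.GLM(y, X, family=sm.families.Poisson()).fit()

# Detection: Pearson chi-square / residual df should be near 1 for a
# well-specified Poisson model; values well above 1 signal overdispersion.
phi = res.pearson_chi2 / res.df_resid
print(f"estimated dispersion = {phi:.2f}")

# Quasi-likelihood correction: keep the point estimates but inflate
# the standard errors by sqrt(phi).
print("naive SEs:    ", res.bse.round(3))
print("corrected SEs:", (res.bse * np.sqrt(phi)).round(3))
```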

10.
We consider the issue of the dynamics of perceptions, as expressed in responses to survey questions on subjective wellbeing. We develop a simulated maximum likelihood method for estimation of dynamic linear models, where the dependent variable is partially observed through ordinal scales. This latent auto-regression model is often more appropriate than the usual state dependence model for attitudinal and interval variables. The paper contains an application to a model of households' perceptions of their financial wellbeing, demonstrating the superior fit of the latent auto-regression model to both the usual static model and the state dependence model.
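A minimal sketch of the data-generating process behind the latent auto-regression model described above: a continuous AR(1) index observed only through ordinal bands. The thresholds and parameter values are illustrative assumptions; the simulated-maximum-likelihood estimation step itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(5)

# Latent auto-regression with ordinal observation: a continuous
# wellbeing index y* follows an AR(1), but we only observe which of
# K ordered bands it falls in. All values are illustrative.
T, rho, sigma = 200, 0.8, 1.0
cuts = np.array([-1.0, 0.0, 1.0])      # thresholds for 4 categories

ystar = np.empty(T)
ystar[0] = rng.normal(0, sigma / np.sqrt(1 - rho**2))  # stationary start
for t in range(1, T):
    ystar[t] = rho * ystar[t - 1] + rng.normal(0, sigma)

# Observed ordinal response: the number of thresholds below y*.
y_obs = np.searchsorted(cuts, ystar)
print("category counts:", np.bincount(y_obs, minlength=4))
```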

11.
12.
Statistical Methods & Applications - Economic insecurity has increased in importance in the understanding of economic and socio-demographic household behaviour. The present paper aims to...

13.
14.
15.
Revisiting environmental-protection activities in environmental-economic accounting
朱力崎, 《统计研究》 (Statistical Research), 2001, 18(5): 23-24
The traditional GDP indicator system measures the total impact of human activity on the economy, including both positive and negative impacts. This fatal flaw makes the GDP system unsuitable for measuring welfare under sustainable development, and the NDP system suffers from the same problem: NDP ignores human activity's use of natural resources and the losses from environmental degradation caused by pollutant emissions. In 1993 the United Nations Statistical Commission designed the System of integrated Environmental and Economic Accounting (SEEA), which starts from NDP and deducts natural-resource inputs and the losses from environmental-quality degradation caused by production activities to obtain the eco domestic product, EDP. EDP measures the net welfare change produced by human production activity more correctly and embodies the principle of sustainable development. …
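Stated as an accounting identity, with invented figures purely for illustration (they are not taken from the article):

    EDP = NDP − natural-resource depletion − environmental-degradation losses

For example, NDP = 1000, depletion = 60 and degradation losses = 40 give EDP = 900; the 100 deducted is the part of measured production that came at the environment's expense.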

16.
Time series modelling of childhood diseases: a dynamical systems approach
A key issue in the dynamical modelling of epidemics is the synthesis of complex mathematical models and data by means of time series analysis. We report such an approach, focusing on the particularly well-documented case of measles. We propose the use of a discrete time epidemic model comprising the infected and susceptible classes as state variables. The model uses a discrete time version of the susceptible-exposed-infected-recovered type epidemic models, which can be fitted to observed disease incidence time series. We describe a method for reconstructing the dynamics of the susceptible class, which is an unobserved state variable of the dynamical system. The model provides a remarkable fit to the data on case reports of measles in England and Wales from 1944 to 1964. Moreover, its systematic part explains the well-documented predominant biennial cyclic pattern. We study the dynamic behaviour of the time series model and show that episodes of annual cyclicity, which have not previously been explained quantitatively, arise as a response to a quicker replenishment of the susceptible class during the baby boom, around 1947.
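A minimal sketch of a discrete-time model in the spirit described above (a TSIR-style formulation with susceptibles replenished by births and depleted by infection); the parameter values and seasonal forcing are illustrative assumptions, not the fitted England-and-Wales values.

```python
import numpy as np

# Deterministic TSIR-style skeleton; all parameters are illustrative.
n_years, per_year = 20, 26             # biweekly generations of measles
T = n_years * per_year
N, B = 50_000_000, 13_000              # population size, births per biweek
alpha = 0.97                           # heterogeneous-mixing exponent

# Seasonally forced transmission rate (crude sinusoid, not term-time).
season = np.cos(2 * np.pi * np.arange(T) / per_year)
beta = 30.0 * (1 + 0.25 * season)

S, I = np.empty(T), np.empty(T)
S[0], I[0] = 2.2e6, 13_000             # start near endemic equilibrium
for t in range(T - 1):
    # New infections, capped by the available susceptibles.
    I[t + 1] = min(beta[t] * S[t] * I[t] ** alpha / N, S[t])
    S[t + 1] = S[t] + B - I[t + 1]     # births replenish susceptibles

# Report the spacing between epidemic peaks (biennial vs annual cycles).
peaks = (I[1:-1] > I[:-2]) & (I[1:-1] > I[2:])
print("inter-peak spacing (biweeks):", np.diff(np.where(peaks)[0] + 1))
```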

17.
Two ways of modelling overdispersion in non-normal data
For non-normal data assumed to follow distributions, such as the Poisson, that have an a priori dispersion parameter, there are two ways of modelling overdispersion: a quasi-likelihood approach or a random-effect model. The two approaches yield different variance functions for the response, which may be distinguishable if adequate data are available. The epilepsy data of Thall and Vail and the fabric data of Bissell are used to exemplify the ideas.
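In the Poisson case the two routes imply different mean-variance relationships (the symbols below are generic notation, not the authors'). The quasi-likelihood approach keeps the Poisson mean but inflates the variance proportionally, Var(Y) = φμ, while a multiplicative gamma random effect on the mean yields the negative binomial form Var(Y) = μ + μ²/ν. With adequate replication across a range of means, the linear and quadratic variance functions can be told apart, which is the distinguishability point made above.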

18.
Approaches that use the pseudolikelihood to perform multilevel modelling on survey data have been presented in the literature. To avoid biased estimates due to unequal selection probabilities, conditional weights can be introduced at each level. Less-biased estimators can also be obtained in a two-level linear model if the level-1 weights are scaled. In this paper, we studied several level-2 weights that can be introduced into the pseudolikelihood when the sampling design and the hierarchical structure of the multilevel model do not match. Two-level and three-level models were studied. The present work was motivated by a study that aims to estimate the contributions of lead sources to the contamination of interior floor dust in rooms within dwellings. We performed a simulation study using the real data collected from a French survey to achieve our objective. We conclude that it is preferable to use unweighted analyses or, at most, conditional level-2 weights in a two-level or a three-level model. We state some warnings and make some recommendations.
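A minimal sketch of one commonly used level-1 weight scaling mentioned above: rescaling design weights so that they sum to the cluster sample size. The data are illustrative assumptions.

```python
import numpy as np

# Level-1 design weights and level-2 cluster membership (illustrative).
w = np.array([2.0, 4.0, 4.0, 1.0, 3.0])
cluster = np.array([0, 0, 0, 1, 1])

# "Size" scaling: within each cluster, rescale so the weights sum to
# the cluster sample size n_c.
w_scaled = w.copy()
for c in np.unique(cluster):
    idx = cluster == c
    w_scaled[idx] *= idx.sum() / w[idx].sum()

print(w_scaled)   # within-cluster sums now equal the cluster sizes
```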

19.
Conjoint choice experiments have become a powerful tool to explore individual preferences. The consistency of respondents' choices depends on the choice complexity. For example, it is easier to make a choice between two alternatives with few attributes than between five alternatives with several attributes; in the latter case it is much harder to choose the preferred alternative, which is reflected in a higher response error. Several authors have dealt with this choice complexity at the estimation stage, but very little attention has been paid to setting up designs that take this complexity into account. The core issue of this paper is to find out whether it is worthwhile to take this complexity into account at the design stage. We construct efficient semi-Bayesian D-optimal designs for the heteroscedastic conditional logit model, which is used to model the across-respondent variability that occurs due to choice complexity. The degree of complexity is measured by the entropy, as suggested by Swait and Adamowicz (2001). The proposed designs are compared with a semi-Bayesian D-optimal design constructed without taking the complexity into account. The simulation study shows that it is much better to take the choice complexity into account when constructing conjoint choice experiments.
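A minimal sketch of the entropy measure of choice complexity cited above (Swait and Adamowicz, 2001): multinomial logit choice probabilities for one choice set and their entropy. The utilities are illustrative assumptions.

```python
import numpy as np

# Systematic utilities of the alternatives in one choice set (illustrative).
v = np.array([0.2, 0.1, -0.3, 0.0])

# Multinomial logit choice probabilities and their entropy: higher
# entropy means a harder, more complex choice.
p = np.exp(v) / np.exp(v).sum()
entropy = -(p * np.log(p)).sum()

print("choice probabilities:", p.round(3))
print(f"entropy H = {entropy:.3f} "
      f"(maximum for 4 alternatives: {np.log(4):.3f})")
```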

20.