Similar articles
20 similar articles found.
1.
A threshold autoregressive model for wholesale electricity prices
Summary.  We introduce a discrete time model for electricity prices which accounts for both transitory spikes and temperature effects. The model allows for different rates of mean reversion: one for weather events, one around price jumps and another for the remainder of the process. We estimate the model by using a Markov chain Monte Carlo approach with 3 years of daily data from Allegheny County, Pennsylvania. We show that our model outperforms existing stochastic jump diffusion models for this data set. Results also demonstrate the importance of model parameters corresponding to both the temperature effect and the multilevel mean reversion rate.
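As a rough illustration of the multilevel mean-reversion idea (a sketch, not the authors' fitted model), the simulation below generates a daily price series that reverts quickly after a spike and slowly otherwise; the temperature term is omitted and every parameter value is invented for demonstration.

```python
import numpy as np

# Illustrative threshold autoregressive price process with two
# mean-reversion rates: fast decay after a jump, slow otherwise.
# All parameter values are hypothetical.
rng = np.random.default_rng(0)
mu = 40.0          # long-run mean price ($/MWh, made up)
k_normal = 0.10    # mean-reversion rate in the normal regime
k_spike = 0.60     # faster reversion applied above the spike threshold
jump_prob, jump_size = 0.02, 60.0

prices = [mu]
for t in range(999):
    p = prices[-1]
    # threshold rule: revert quickly when the price is far above the mean
    k = k_spike if p - mu > 20.0 else k_normal
    p_next = p + k * (mu - p) + rng.normal(0, 2.0)
    if rng.random() < jump_prob:      # transitory spike
        p_next += jump_size
    prices.append(p_next)

prices = np.array(prices)
print(round(prices.mean(), 1))
```

The two regimes reproduce the stylized fact the abstract points to: spikes die out within a few days, while ordinary fluctuations decay much more slowly.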

2.
Summary.  Improving educational achievement in UK schools is a priority, and of particular concern is the low achievement of specific groups, such as those from lower socio-economic backgrounds. An obvious question is whether we should be improving the outcomes of these pupils by spending more on their education. The literature on the effect of educational spending on the achievement of pupils has some methodological difficulties, in particular the endogeneity of school resource levels, and the intraschool correlations in pupils' responses. We adopt a multi-level simultaneous equation modelling approach to assess the effect of school resources on pupil attainment at age 14 years. The paper is the first to apply a simultaneous equation model to estimate the effect of school resources on pupils' achievement, using the newly available national pupil database and pupil level annual school census.

3.
We present a systematic approach to the practical and comprehensive handling of missing data motivated by our experiences of analyzing longitudinal survey data. We consider the Health 2000 and 2011 Surveys (BRIF8901), where increased non-response and non-participation from 2000 to 2011 was a major issue. The model assumptions involved in the complex sampling design, repeated measurements design, non-participation mechanisms and associations are presented graphically using methodology previously defined as a causal model with design, i.e. a functional causal model extended with the study design. This tool forces the statistician to make the study design and the missing-data mechanism explicit. Using the systematic approach, the sampling probabilities and the participation probabilities can be considered separately. This is beneficial when the performance of missing-data methods is to be compared. Using data from the Health 2000 and 2011 Surveys and from national registries, it was found that multiple imputation removed almost all differences between full-sample and estimated prevalences. Inverse probability weighting removed more than half of the differences, and the doubly robust method 60%. These findings are encouraging, since decreasing participation rates are a major problem in population surveys worldwide.

4.
We propose a flexible model approach for the distribution of random effects when both response variables and covariates have non-ignorable missing values in a longitudinal study. A Bayesian approach is developed with a choice of nonparametric prior for the distribution of random effects. We apply the proposed method to a real data example from a national long-term survey by Statistics Canada. We also design simulation studies to further check the performance of the proposed approach. The results of the simulation studies indicate that the proposed approach outperforms the conventional approach with a normality assumption when heterogeneity in the random-effects distribution is salient.

5.
This article describes how a frequentist model averaging approach can be used for concentration–QT analyses in the context of thorough QTc studies. Based on simulations, we have concluded that, starting from three candidate model families (linear, exponential, and Emax), the model averaging approach leads to treatment effect estimates that are quite robust with respect to the control of the type I error in nearly all simulated scenarios; in particular, with the model averaging approach, the type I error appears less sensitive to model misspecification than with the widely used linear model. We also noticed few differences in performance between the model averaging approach and the more classical model selection approach, but we believe that, although both can be recommended in practice, the model averaging approach can be more appealing because of some deficiencies of the model selection approach pointed out in the literature. We think that a model averaging or model selection approach should be systematically considered for conducting concentration–QT analyses. Copyright © 2016 John Wiley & Sons, Ltd.
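The mechanics of frequentist model averaging can be sketched with AIC-based weights. The snippet below is an illustrative invention, not the article's procedure: it fits only two of the three candidate families (linear and Emax, with a crude grid search standing in for a proper nonlinear optimizer) to synthetic concentration–QT data, then averages the per-model effect estimates with Akaike weights.

```python
import numpy as np

# Model averaging via Akaike weights on synthetic concentration-QT data.
# Candidate set, data, and the Emax grid search are all illustrative.
rng = np.random.default_rng(1)
conc = np.linspace(0, 10, 50)
dqtc = 8 * conc / (2 + conc) + rng.normal(0, 0.5, 50)   # Emax-shaped truth

def aic(rss, n, k):
    return n * np.log(rss / n) + 2 * k

# candidate 1: linear, dQTc = a + b*conc
X = np.column_stack([np.ones_like(conc), conc])
beta, rss_lin, *_ = np.linalg.lstsq(X, dqtc, rcond=None)
aic_lin = aic(rss_lin[0], conc.size, 2)

# candidate 2: Emax, dQTc = emax*conc/(ec50 + conc), crude grid search
best = (np.inf, 0.0, 0.0)
for emax in np.linspace(1, 15, 60):
    for ec50 in np.linspace(0.5, 8, 60):
        rss = float(np.sum((dqtc - emax * conc / (ec50 + conc)) ** 2))
        best = min(best, (rss, emax, ec50))
aic_emax = aic(best[0], conc.size, 2)

# Akaike weights, then a model-averaged effect estimate at conc = 5
a = np.array([aic_lin, aic_emax])
w = np.exp(-0.5 * (a - a.min()))
w /= w.sum()
pred_lin = float(beta[0] + beta[1] * 5)
pred_emax = best[1] * 5 / (best[2] + 5)
averaged = float(w @ np.array([pred_lin, pred_emax]))
print(round(averaged, 2))
```

Because the data are Emax-shaped, the Emax candidate receives almost all the weight, while the averaged estimate degrades gracefully rather than committing to a single possibly misspecified model.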

6.
Biomarkers have the potential to improve our understanding of disease diagnosis and prognosis. Biomarker levels that fall below the assay detection limits (DLs), however, compromise the application of biomarkers in research and practice. Most existing methods to handle non-detects focus on a scenario in which the response variable is subject to the DL; only a few methods consider explanatory variables when dealing with DLs. We propose a Bayesian approach for generalized linear models with explanatory variables subject to lower, upper, or interval DLs. In simulation studies, we compared the proposed Bayesian approach to four commonly used methods in a logistic regression model with explanatory variable measurements subject to the DL. We also applied the Bayesian approach and the other four methods in a real study, in which a panel of cytokine biomarkers was studied for association with acute lung injury (ALI). We found that IL8 was associated with a moderate increase in risk for ALI in the model based on the proposed Bayesian approach.
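A toy version of the setting the abstract describes can make the problem concrete: an explanatory biomarker falls below a lower detection limit for part of the sample. The sketch below shows two of the common ad-hoc fixes (complete-case analysis and DL/√2 substitution) on synthetic data; it does not reproduce the paper's Bayesian model, and every number in it is invented.

```python
import numpy as np

# Explanatory variable subject to a lower detection limit (DL):
# comparing two ad-hoc fixes on synthetic data.
rng = np.random.default_rng(2)
n, dl = 2000, 0.5
x = rng.lognormal(0.0, 1.0, n)               # true concentration
y = 1.0 + 0.8 * x + rng.normal(0, 0.5, n)    # outcome depending on x

detected = x >= dl                           # non-detects: x < DL

# fix 1: complete-case analysis (drop the non-detects)
s_cc = float(np.polyfit(x[detected], y[detected], 1)[0])
# fix 2: substitute the conventional DL/sqrt(2) for every non-detect
x_sub = np.where(detected, x, dl / np.sqrt(2))
s_sub = float(np.polyfit(x_sub, y, 1)[0])
print(round(s_cc, 2), round(s_sub, 2))
```

In this easy linear setting both fixes recover a slope near the true 0.8; the paper's point is that such shortcuts can fail in harder settings (e.g. heavy censoring, nonlinear links), which motivates modeling the censored covariate explicitly.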

7.
Since Sen's (1976) paper on poverty measurement, a substantial literature, both theoretical and empirical, has emerged. There have been several recent efforts to derive poverty measures based on different approaches and axioms. These poverty indices are based on the head count ratio, poverty gaps and the distribution of income; they are narrow in approach and suffer from several drawbacks. The purpose of the present paper is to introduce a new poverty measure based on a holistic, system-modelling approach. Building on Chopra's human contestability approach to poverty (Chopra, 2003, 2007), this new measure has been developed using a structural equation model based on Kanji's business excellence model (Kanji, 2002). We construct a latent variable structural equation model to measure contestability excellence within certain boundaries of the societal system. It provides a measurement of poverty in a society or community in terms of human contestability: a higher human contestability index indicates lower poverty within the society. Strengths and weaknesses of the various components also indicate which characteristics of individuals require extra societal or government support to remove poverty. However, there remains considerable disagreement on the best way to achieve this.

8.
We analyze the multivariate spatial distribution of plant species diversity across three ecologically distinct land uses: urban residential, urban non-residential, and desert. We model these data using a spatial generalized linear mixed model, in which plant species counts are assumed to be correlated within and among the spatial locations. We implement this model across the Phoenix metropolis and surrounding desert. Using a Bayesian approach, we employ the Langevin–Hastings hybrid algorithm. Under a generalization of the spatial log-Gaussian Cox model, the log-intensities of the species count processes follow Gaussian distributions. The purely spatial components corresponding to these log-intensities are jointly modeled using a cross-convolution approach, in order to obtain a valid cross-correlation structure. This approach yields a non-stationary model, reflecting the different land use types. We obtain predictions of various measures of plant diversity, including plant richness and the Shannon–Wiener diversity index, at observed locations, as well as a prediction framework for plant preferences in urban and desert plots.

9.
Repeated neuropsychological measurements, such as mini-mental state examination (MMSE) scores, are frequently used in Alzheimer's disease (AD) research to study change in the cognitive function of AD patients. A question of interest among dementia researchers is whether some AD patients exhibit transient "plateaus" of cognitive function in the course of the disease. We consider a statistical approach to this question based on irregularly spaced repeated MMSE scores. We propose an algorithm that formalizes the measurement of an apparent cognitive plateau, and a procedure to evaluate the evidence of plateaus in AD by applying the algorithm both to the observed data and to data sets simulated from a linear mixed model. We apply these methods to repeated MMSE data from the Michigan Alzheimer's Disease Research Center, finding a high rate of apparent plateaus but also a high rate of false discovery. Simulation studies are also conducted to assess the performance of the algorithm. In general, the false discovery rate of the algorithm is high unless the rate of decline is large compared with the measurement error of the cognitive test. It is argued that the results are not a problem of the specific algorithm chosen, but reflect a lack of information concerning the presence of plateaus in the data.

10.
A study on measuring China's macro-financial risk
Based on the principles for constructing statistical indicator systems, this paper builds an indicator system for measuring macro-financial risk that covers the macroeconomic environment, banks' non-performing loans, asset bubbles, government debt and foreign-capital shocks, and converts the raw indicators to a common interval scale using a mapping method. Risk weights are then determined by the analytic hierarchy process, yielding a theoretical model for measuring China's macro-financial risk.
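The AHP weighting step mentioned in the abstract can be sketched as follows: weights are read off the principal eigenvector of a reciprocal pairwise-comparison matrix. The 3×3 matrix below is invented purely to show the mechanics (e.g. criterion 1 judged 3× as important as criterion 2), not taken from the paper's indicator system.

```python
import numpy as np

# Analytic hierarchy process (AHP): weights from the principal
# eigenvector of a (hypothetical) pairwise-comparison matrix.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                  # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                 # normalised risk weights

# consistency check: lambda_max close to n means consistent judgements
ci = (eigvals.real[k] - 3) / (3 - 1)
cr = ci / 0.58                               # 0.58 = random index for n=3
print(np.round(w, 3), round(cr, 3))
```

A consistency ratio below 0.1 is the conventional threshold for accepting the judgements; here the matrix is nearly consistent, so the weights can be used directly in the composite risk index.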

11.
The optimal strategies for a long-term static investor are studied. Given a portfolio of a stock and a bond, we derive the optimal allocation of capital to maximize the expected long-term growth rate of a utility function of the wealth. When the bond has a constant interest rate, three models for the underlying stock price process are studied: the Heston model, the 3/2 model, and a jump diffusion model. We also study the optimal strategies for a portfolio in which the stock price follows a Black-Scholes model and the bond has a Vasicek interest rate that is correlated with the stock price.
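In the textbook base case (Black-Scholes stock, constant-rate bond, log utility), which is simpler than the models the paper studies, the growth-optimal allocation is the classical Merton/Kelly fraction. The check below uses made-up parameters and verifies numerically that this fraction maximizes the long-run log-growth rate g(π) = r + π(μ − r) − ½π²σ².

```python
import numpy as np

# Growth-optimal stock fraction for a constantly rebalanced
# Black-Scholes/constant-rate portfolio (log utility).
# Parameters are hypothetical.
mu, r, sigma = 0.08, 0.02, 0.2

def g(pi):
    # expected long-term log-growth rate of wealth at stock fraction pi
    return r + pi * (mu - r) - 0.5 * pi**2 * sigma**2

pi_star = (mu - r) / sigma**2            # Merton/Kelly fraction (= 1.5 here)
grid = np.linspace(-1, 4, 501)
pi_best = float(grid[np.argmax(g(grid))])
print(pi_star, round(pi_best, 2), round(g(pi_star), 4))
```

The paper's contribution lies in extending this kind of growth-rate maximization to stochastic-volatility (Heston, 3/2), jump, and Vasicek-rate settings, where no such simple closed form is available in general.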

12.
Interval-censored data arise from a sequence of random examinations such that the failure time of interest is known only to fall within an interval. In some medical studies there also exist long-term survivors, who can be considered permanently cured. We consider a mixture model in which the uncured group follows a linear transformation model and the cured group follows a logistic regression model. For inference on the parameters, an EM algorithm is developed for a full likelihood approach. Simulation studies are conducted to investigate the finite sample properties of the proposed method. The approach is applied to the National Aeronautics and Space Administration's hypobaric decompression sickness data.

13.
We present a Bayesian analysis of a piecewise linear model constructed by using basis functions which generalizes the univariate linear spline to higher dimensions. Prior distributions are adopted on both the number and the locations of the splines, which leads to a model averaging approach to prediction with predictive distributions that take into account model uncertainty. Conditioning on the data produces a Bayes local linear model with distributions on both predictions and local linear parameters. The method is spatially adaptive and covariate selection is achieved by using splines of lower dimension than the data.

14.
Traditionally, hydrological analysis employs only one hydrological variable. Recently, Nadarajah [A bivariate distribution with gamma and beta marginals with application to drought data. J Appl Stat. 2009;36:277–301] proposed a bivariate model with gamma and beta marginal distributions to analyse drought duration and the proportion of drought events. However, the validity of this method hinges on the fulfilment of stringent assumptions. We propose a robust likelihood approach which can be used to make inference for general bivariate continuous and proportion data. Unlike the gamma–beta (GB) model, which is sensitive to model misspecification, the new method provides legitimate inference without knowledge of the true underlying distribution of the bivariate data. Simulations and an analysis of drought data from the State of Nebraska, USA, are provided to contrast this robust approach with the GB model.

15.
Most technological innovation diffusion follows an S-shaped curve, but in many practical situations this may not hold. To this end, the Weibull model was proposed to capture the diffusion of new technological innovations that do not follow any specific pattern. Nonlinear growth models play a very important role in gaining insight into the underlying mechanism; these models are generally 'mechanistic', as their parameters have meaningful interpretations. However, nonlinear least-squares estimation of the Weibull model's parameters can fail to converge. Taking this problem into consideration, we propose the use of a powerful technique, the genetic algorithm, for parameter estimation. The methodology is validated by a simulation study to check whether the parameter estimates are close to the true values. For illustration, we model tractor-density time-series data for India as a whole and for some major states of India. The fitted Weibull model is able to capture the technology diffusion process in a reasonable manner. Further, comparisons are made with the logistic and Gompertz models, and the Weibull model is found to perform better for the data sets under consideration.
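A minimal genetic algorithm for this kind of fit can be sketched as follows. The growth-curve form y(t) = a − b·exp(−c·tᵈ) is one common Weibull growth parameterization (the paper's exact form may differ), and the data, bounds, and all GA settings below are synthetic/illustrative.

```python
import numpy as np

# Toy genetic algorithm fitting a Weibull growth curve
# y(t) = a - b*exp(-c*t**d) to synthetic "diffusion" data.
rng = np.random.default_rng(3)
t = np.arange(1.0, 31.0)
a0, b0, c0, d0 = 100.0, 95.0, 0.01, 2.0          # hypothetical true values
y = a0 - b0 * np.exp(-c0 * t**d0) + rng.normal(0, 1, t.size)

lo = np.array([50.0, 50.0, 1e-4, 0.5])            # search-box bounds
hi = np.array([150.0, 150.0, 0.5, 4.0])

def sse(p):
    a, b, c, d = p
    return float(np.sum((y - (a - b * np.exp(-c * t**d))) ** 2))

pop = rng.uniform(lo, hi, size=(80, 4))           # random initial population
for gen in range(300):
    fit = np.array([sse(p) for p in pop])
    order = np.argsort(fit)
    parents = pop[order[:40]]                     # truncation selection
    i = rng.integers(0, 40, size=(80, 2))
    alpha = rng.random((80, 1))
    children = alpha * parents[i[:, 0]] + (1 - alpha) * parents[i[:, 1]]
    mutate = rng.random((80, 4)) < 0.1            # 10% gene mutation rate
    noise = rng.normal(0, 0.05, (80, 4)) * (hi - lo)
    pop = np.clip(children + mutate * noise, lo, hi)
    pop[0] = parents[0]                           # elitism: keep the best

best = min(pop, key=sse)
print(np.round(best, 2), round(sse(best), 1))
```

Because the GA only needs function evaluations inside a bounded box, it sidesteps the convergence failures that derivative-based nonlinear least squares can suffer with this model, at the cost of more computation.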

16.
Censoring of a longitudinal outcome often occurs when data are collected in a biomedical study where the interest is in the survival and/or longitudinal experiences of a study population. In the setting considered here, we encountered upper and lower censored data as the result of restrictions imposed on measurements from a kinetic model producing "biologically implausible" kidney clearances. The goal of this paper is to outline the use of a joint model to determine the association between a censored longitudinal outcome and a time-to-event endpoint. This paper extends Guo and Carlin's [6] work to accommodate censored longitudinal data, in a commercially available software platform, by linking a mixed effects Tobit model to a suitable parametric survival distribution. Our simulation results showed that our joint Tobit model outperforms a joint model built on the more naïve "fill-in" method for the longitudinal component, in which the upper and/or lower censored values are replaced by the limit of detection. We illustrate the approach with data from the hemodialysis (HEMO) study [3], examining the association between doubly censored kidney clearance values and survival.

17.
We propose a class of state-space models for multivariate longitudinal data where the components of the response vector may have different distributions. The approach is based on the class of Tweedie exponential dispersion models, which accommodates a wide variety of discrete, continuous and mixed data. The latent process is assumed to be a Markov process, and the observations are conditionally independent given the latent process, over time as well as over the components of the response vector. This provides a fully parametric alternative to the quasilikelihood approach of Liang and Zeger. We estimate the regression parameters for time-varying covariates entering either via the observation model or via the latent process, based on an estimating equation derived from the Kalman smoother. We also consider analysis of residuals from both the observation model and the latent process.

18.
We propose a novel approach to estimating the Cox model with temporal covariates. Our approach treats the temporal covariates as arising from a longitudinal process which is modeled jointly with the event time. Unlike in the existing literature, the longitudinal process in our model is specified as a bounded variational process determined by a family of initial value problems associated with an ordinary differential equation. Our specification has the advantage that only the observation of the temporal covariates at the event time, and the event time itself, are needed to fit the model, although additional longitudinal observations can be used when available. This makes our approach very useful for many medical outcome data sets, such as SPARCS and NIS, where it is important to find the hazard rate of discharge given the cumulative cost, but only the total cost at discharge is available owing to the protection of private information. Our estimation procedure is based on maximizing the full information likelihood function. The resulting estimators are shown to be consistent and asymptotically normally distributed. Simulations and a real example illustrate the utility of the proposed model. Finally, a couple of extensions are discussed.

19.
This paper provides a statistically unified method for modelling trends in groundwater levels for a national project that aims to predict areas at risk from salinity in 2020. It was necessary to characterize the trends in groundwater levels in thousands of boreholes that have been monitored by Agriculture Western Australia throughout the south-west of Western Australia over the last 10 years. The approach investigated in the present paper uses segmented regression with constraints when the number of change points is unknown. For each segment defined by change points, the trend can be described by a linear trend possibly superimposed on a periodic response. Four different types of change point are defined by constraints on the model parameters to cope with different patterns of change in groundwater levels. For a set of candidate change points provided by the user, a modified Akaike information criterion is used for model selection. Model parameters can be estimated by multiple linear regression. Some typical examples are presented to demonstrate the performance of the approach.
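A stripped-down version of this idea, simplified to a single change point and a plain (not modified) AIC, can be sketched as follows: a broken-stick basis keeps the two linear segments joined, candidate change points are supplied by the user, and AIC picks among them. The simulated "groundwater" series below is invented; it is not AgWA borehole data.

```python
import numpy as np

# Single-change-point segmented regression selected by AIC.
# Broken-stick basis max(t - cp, 0) enforces continuity at the break.
rng = np.random.default_rng(4)
t = np.arange(120.0)                       # months (hypothetical)
level = 12 - 0.01 * t + rng.normal(0, 0.15, 120)
level[60:] -= 0.04 * (t[60:] - 60)         # decline steepens at month 60

def fit(X):
    beta, rss, *_ = np.linalg.lstsq(X, level, rcond=None)
    return beta, float(rss[0])

def aic(rss, k, n=120):
    return n * np.log(rss / n) + 2 * k

X0 = np.column_stack([np.ones(120), t])    # no-break linear trend
_, rss0 = fit(X0)
best = (np.inf, -1)
for cp in range(20, 100, 5):               # user-supplied candidates
    X = np.column_stack([np.ones(120), t, np.maximum(t - cp, 0)])
    _, rss = fit(X)
    best = min(best, (aic(rss, 4), cp))    # k = 3 coefficients + 1 break

print("no-break AIC:", round(aic(rss0, 2), 1), "best cp:", best[1])
```

The paper's method generalizes this in the directions the abstract lists: multiple change points, four constrained change-point types, and an optional periodic component in each segment.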

20.
We consider the problem of selecting a regression model from a large class of possible models in the case where no true model is believed to exist. In practice, few statisticians, or scientists who employ statistical methods, believe that a "true" model exists, but nonetheless they seek to select a model as a proxy from which to predict. Unlike much of the recent work in this area, we address this problem explicitly. We develop Bayesian predictive model selection techniques for the case where proper conjugate priors are used, and obtain an easily computed expression for the model selection criterion. We also derive expressions for updating the value of the statistic when a predictor is dropped from the model, and apply this approach to a large well-known data set.

