Similar Literature
20 similar documents retrieved.
1.
Nonstationary time series are frequently detrended in empirical investigations by regressing the series on time or a function of time. The effects of the detrending on the tests for causal relationships in the sense of Granger are investigated using quarterly U.S. data. The causal relationships between nominal or real GNP and M1, inferred from the Granger–Sims tests, are shown to depend very much on, among other factors, whether or not the series are detrended. Detrending tends to remove or weaken causal relationships, and conversely, failure to detrend tends to introduce or enhance causal relationships. The study suggests that we need a more robust test or a better definition of causality.
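As a minimal illustration of the comparison described above, the sketch below runs Granger-causality tests on raw and on linearly detrended series. The simulated series, the lag order, and the linear-trend detrending are illustrative assumptions, not the data or procedure of the cited study.

import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import grangercausalitytests

def detrend(y):
    # Remove a linear time trend by regressing y on (1, t) and keeping the residuals.
    t = np.arange(len(y))
    return y - sm.OLS(y, sm.add_constant(t)).fit().fittedvalues

# Hypothetical quarterly series standing in for GNP and M1.
rng = np.random.default_rng(0)
m1 = np.cumsum(rng.normal(0.5, 1.0, 200))
gnp = np.cumsum(rng.normal(1.0, 1.0, 200))

# Column 2 is tested as a Granger cause of column 1, with and without detrending.
grangercausalitytests(np.column_stack([gnp, m1]), maxlag=4)
grangercausalitytests(np.column_stack([detrend(gnp), detrend(m1)]), maxlag=4)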

2.
A challenge for implementing performance-based Bayesian sample size determination is selecting which of several methods to use. We compare three Bayesian sample size criteria: the average coverage criterion (ACC), which controls the coverage rate of fixed-length credible intervals over the predictive distribution of the data; the average length criterion (ALC), which controls the length of credible intervals with a fixed coverage rate; and the worst outcome criterion (WOC), which ensures the desired coverage rate and interval length over all (or a subset of) possible datasets. For most models, the WOC produces the largest sample size among the three criteria, and the sample sizes obtained by the ACC and the ALC are not the same. For Bayesian sample size determination for normal means and differences between normal means, we investigate, for the first time, the direction and magnitude of the differences between the ACC and ALC sample sizes. For fixed hyperparameter values, we show that the difference between the ACC and ALC sample sizes depends on the nominal coverage, and not on the nominal interval length. There exists a threshold value of the nominal coverage level such that below the threshold the ALC sample size is larger than the ACC sample size, and above the threshold the ACC sample size is larger. Furthermore, the ACC sample size is more sensitive to changes in the nominal coverage. We also show that for fixed hyperparameter values, there exists an asymptotically constant ratio between the WOC sample size and the ALC (ACC) sample size. Simulation studies are conducted to show that similar relationships among the ACC, ALC, and WOC may hold for estimating binomial proportions. We provide a heuristic argument that the results can be generalized to a larger class of models.
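A sketch of how ACC- and ALC-style sample sizes can be computed for a binomial proportion with a Beta prior, by averaging over the beta-binomial prior predictive distribution. Equal-tailed intervals centred at the posterior mean are used as a simplification of the credible intervals in the criteria; the prior parameters, target length and target coverage are illustrative assumptions.

import numpy as np
from scipy.stats import beta, betabinom

def alc_length(n, a, b, cover=0.95):
    # ALC ingredient: average length of the equal-tailed credible interval with fixed coverage.
    x = np.arange(n + 1)
    w = betabinom.pmf(x, n, a, b)                     # prior predictive weights
    lo = beta.ppf((1 - cover) / 2, a + x, b + n - x)
    hi = beta.ppf(1 - (1 - cover) / 2, a + x, b + n - x)
    return np.sum(w * (hi - lo))

def acc_coverage(n, a, b, length=0.10):
    # ACC ingredient: average coverage of a fixed-length interval centred at the posterior mean.
    x = np.arange(n + 1)
    w = betabinom.pmf(x, n, a, b)
    m = (a + x) / (a + b + n)
    lo = np.clip(m - length / 2, 0, 1)
    hi = np.clip(m + length / 2, 0, 1)
    cov = beta.cdf(hi, a + x, b + n - x) - beta.cdf(lo, a + x, b + n - x)
    return np.sum(w * cov)

def smallest_n(criterion, target, larger_is_better, n_max=5000):
    # Smallest sample size at which the averaged criterion meets its target.
    for n in range(1, n_max):
        v = criterion(n)
        if (v >= target if larger_is_better else v <= target):
            return n
    return None

a, b = 1.0, 1.0                                       # hypothetical Beta(1, 1) prior
n_alc = smallest_n(lambda n: alc_length(n, a, b), target=0.10, larger_is_better=False)
n_acc = smallest_n(lambda n: acc_coverage(n, a, b), target=0.95, larger_is_better=True)
print(n_alc, n_acc)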

3.
To learn about the progression of a complex disease, it is necessary to understand the physiology and function of many genes operating together in distinct interactions as a system. In order to significantly advance our understanding of the function of a system, we need to learn the causal relationships among its modeled genes. To this end, it is desirable to compare experiments of the system under complete interventions of some genes, e.g., knock-out of some genes, with experiments of the system without interventions. However, it is expensive and difficult (if not impossible) to conduct wet lab experiments of complete interventions of genes in animal models, e.g., a mouse model. Thus, it will be helpful if we can discover promising causal relationships among genes with observational data alone in order to identify promising genes to perturb in the system that can later be verified in wet laboratories. While causal Bayesian networks have been actively used in discovering gene pathways, most of the algorithms that discover pairwise causal relationships from observational data alone identify only a small number of significant pairwise causal relationships, even with a large dataset. In this article, we introduce new causal discovery algorithms—the Equivalence Local Implicit latent variable scoring Method (EquLIM) and EquLIM with Markov chain Monte Carlo search algorithm (EquLIM-MCMC)—that identify promising causal relationships even with a small observational dataset.

4.
张凌翔 《统计研究》2014,31(6):107-112
This paper examines the suitability and robustness of six information criteria for selecting the lag order of STAR models. Monte Carlo simulation results show that, in most cases, the error distribution of the data-generating process does not affect the ability of the information criteria to correctly identify the model's maximum lag order. For short STAR models, the ACC criterion achieves a high rate of correct identification and is robust to different smooth-transition coefficients and threshold values; for long STAR models, the SC and ACC criteria achieve higher rates of correct identification and good robustness.
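The mechanics of comparing information criteria across candidate lag orders can be sketched on a linear AR model as a stand-in (STAR estimation, and the ACC criterion studied here, are not available off the shelf in statsmodels); the simulated data and the set of criteria below are illustrative assumptions.

import numpy as np
from statsmodels.tsa.ar_model import ar_select_order

rng = np.random.default_rng(1)
y = np.zeros(500)
for t in range(2, 500):                    # hypothetical AR(2) data-generating process
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

for ic in ("aic", "bic", "hqic"):          # 'bic' plays the role of the SC criterion
    sel = ar_select_order(y, maxlag=8, ic=ic)
    print(ic, sel.ar_lags)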

5.
Causal inference approaches in systems genetics exploit quantitative trait loci (QTL) genotypes to infer causal relationships among phenotypes. The genetic architecture of each phenotype may be complex, and poorly estimated genetic architectures may compromise the inference of causal relationships among phenotypes. Existing methods assume QTLs are known or inferred without regard to the phenotype network structure. In this paper we develop a QTL-driven phenotype network method (QTLnet) to jointly infer a causal phenotype network and associated genetic architecture for sets of correlated phenotypes. Randomization of alleles during meiosis and the unidirectional influence of genotype on phenotype allow the inference of QTLs causal to phenotypes. Causal relationships among phenotypes can be inferred using these QTL nodes, enabling us to distinguish among phenotype networks that would otherwise be distribution equivalent. We jointly model phenotypes and QTLs using homogeneous conditional Gaussian regression models, and we derive a graphical criterion for distribution equivalence. We validate the QTLnet approach in a simulation study. Finally, we illustrate with simulated data and a real example how QTLnet can be used to infer both direct and indirect effects of QTLs and phenotypes that co-map to a genomic region.

6.
Singular spectrum analysis (SSA) is an increasingly popular and widely adopted filtering and forecasting technique which is currently exploited in a variety of fields. Given its increasing application and superior performance in comparison to other methods, it is pertinent to study and distinguish between the two forecasting variants of SSA, referred to as Vector SSA (SSA-V) and Recurrent SSA (SSA-R). The general notion is that SSA-V is more robust and provides better forecasts than SSA-R, especially when faced with time series which are non-stationary and asymmetric, or affected by unit root problems, outliers or structural breaks. However, there is currently no empirical evidence supporting these notions or suggesting that SSA-V is better than SSA-R. In this paper, we evaluate the out-of-sample forecasting capabilities of the optimised SSA-V and SSA-R forecasting algorithms via a simulation study and an application to 100 real data sets with varying structures, to provide a statistically reliable answer to the question of which SSA algorithm is best for forecasting at both short and long run horizons based on several important criteria.
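A bare-bones sketch of the recurrent forecasting variant (SSA-R) in numpy: embed the series into a trajectory matrix, keep the r leading singular triples, reconstruct by diagonal averaging, and extend the series with the linear recurrence implied by the retained eigenvectors. The window length, rank and simulated series are illustrative choices, not the optimised settings compared in the study.

import numpy as np

def ssa_r_forecast(y, L, r, steps):
    y = np.asarray(y, dtype=float)
    N = len(y)
    K = N - L + 1
    X = np.column_stack([y[i:i + L] for i in range(K)])   # L x K trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :r] * s[:r]) @ Vt[:r]                       # rank-r approximation

    rec = np.zeros(N)                                      # diagonal averaging back to a series
    cnt = np.zeros(N)
    for i in range(L):
        for j in range(K):
            rec[i + j] += Xr[i, j]
            cnt[i + j] += 1
    rec /= cnt

    pi = U[L - 1, :r]                                      # last components of the eigenvectors
    R = (U[:L - 1, :r] @ pi) / (1.0 - np.sum(pi ** 2))     # linear recurrence coefficients

    out = list(rec)
    for _ in range(steps):
        out.append(float(R @ np.asarray(out[-(L - 1):])))
    return rec, np.array(out[N:])

# Hypothetical series: trend plus seasonality plus noise.
t = np.arange(300)
y = 0.02 * t + np.sin(2 * np.pi * t / 12) + np.random.default_rng(2).normal(0, 0.3, 300)
rec, forecast = ssa_r_forecast(y, L=60, r=4, steps=24)
print(forecast[:6])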

7.
This study investigates the causal structure among daily Chicago Board of Trade corn futures prices and seven regional cash series from Iowa, Illinois, Indiana, Ohio, Minnesota, Nebraska, and Kansas for January 2006–March 2011. Their wavelet-transformed series are further analyzed for causal relationships at different time scales. Empirical results indicate no causality among states or between the futures and a cash series at time scales shorter than one month. As scales increase but do not exceed a year, bidirectional causal flows are found among all prices. The futures price takes an information leadership role relative to a cash price at scales longer than one year and for the raw series, at which no interstate causality is found.
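One way to obtain scale-by-scale causality tests of this kind is to reconstruct the detail component of each series at each wavelet level and run a causality test per level. The wavelet family, decomposition depth, lag order and simulated prices below are illustrative assumptions rather than the study's exact transform and test.

import numpy as np
import pywt
from statsmodels.tsa.stattools import grangercausalitytests

def detail_components(y, wavelet="db4", level=5):
    # Reconstruct the detail component of y at each level of a discrete wavelet decomposition.
    coeffs = pywt.wavedec(y, wavelet, level=level)
    out = []
    for j in range(1, level + 1):                  # j = 1 is the finest scale
        keep = [np.zeros_like(c) for c in coeffs]
        keep[-j] = coeffs[-j]
        out.append(pywt.waverec(keep, wavelet)[: len(y)])
    return out

rng = np.random.default_rng(3)
futures = np.cumsum(rng.normal(0, 1, 1300))        # hypothetical daily futures prices
cash = 0.8 * futures + rng.normal(0, 2, 1300)      # hypothetical regional cash prices

for j, (f_j, c_j) in enumerate(zip(detail_components(futures), detail_components(cash)), 1):
    print(f"--- level {j}: futures as a candidate cause of cash ---")
    grangercausalitytests(np.column_stack([c_j, f_j]), maxlag=5)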

8.
There are two main parameters in Singular Spectrum Analysis (SSA). The aim of this study is to determine whether the optimal values of these parameters differ between the reconstruction and forecasting stages, and whether they are worth the extra computational effort and time they require. Here, we evaluate these issues using a simulation study.

9.
In singular spectrum analysis (SSA), window length is a critical tuning parameter that must be assigned by the practitioner. This paper provides a theoretical analysis of signal–noise separation and time series reconstruction in SSA that can serve as a guide to optimal window choice. We establish numerical bounds on the mean squared reconstruction error and present their almost sure limits under very general regularity conditions on the underlying data generating mechanism. We also provide asymptotic bounds for the mean squared separation error. Evidence obtained using simulation experiments and real data sets indicates that the theoretical properties are reflected in observed behaviour, even in relatively small samples, and the results indicate how, in practice, an optimal assignment for the window length can be made.

10.
Singular spectrum analysis (SSA) is a relatively new method for time series analysis and comes as a non-parametric alternative to the classical methods. The methodology has proven effective in analysing non-stationary and complex time series, since it is non-parametric and does not require the classical assumptions of stationarity or normality of the residuals. Although SSA has proved to provide advantages over traditional methods, the challenges that arise when long time series are considered make standard SSA computationally very demanding and often unsuitable. In this paper we propose randomized SSA, an alternative to SSA for long time series that does not sacrifice the quality of the analysis. SSA and randomized SSA are compared in terms of quality of the model fit and forecasting, and computational time, using Monte Carlo simulations and real data on the daily prices of five major world commodities.

11.
In psoriatic arthritis, permanent joint damage characterizes disease progression and represents a major debilitating aspect of the disease. Understanding the process of joint damage will assist in the treatment and disease management of patients. Multistate models provide a means to examine patterns of disease, such as symmetric joint damage. Additionally, the link between damage and the dynamic course of disease activity (represented by joint swelling and stress pain) at both the individual joint level and otherwise can be represented within a correlated multistate model framework. Correlation is reflected through the use of random effects for progressive models and robust variance estimation for non-progressive models. Such analyses, undertaken with data from a large psoriatic arthritis cohort, are discussed and the extent to which they permit causal reasoning is considered. For this, emphasis is given to the use of the Bradford Hill criteria for causation in observational studies and the concept of local (in)dependence to capture the dynamic nature of the relationships.

12.
In this study, we consider a causality test for integer-valued time series. Using the mean equation of Poisson INGARCH models, we construct a regression that includes exogenous variables. The test is then constructed based on the least squares estimator and is shown to follow a chi-square distribution under the null of no causal relationships. A simulation study and a real data analysis using crime and temperature data from Chicago are provided for illustration.
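A simplified sketch of a regression-based causality check for a count series: regress the counts on their own lag and on lags of the exogenous variable by least squares, and test the exogenous coefficients jointly with a chi-square Wald statistic. The variable names, lag orders and simulated data are illustrative, and this linear version is only in the spirit of the INGARCH-mean-equation construction.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 500
temp = rng.normal(10, 8, n)
lam = np.exp(1.0 + 0.02 * np.r_[0.0, temp[:-1]])   # counts driven by lagged temperature
crime = rng.poisson(lam)

df = pd.DataFrame({"crime": crime, "temp": temp})
df["crime_l1"] = df["crime"].shift(1)
df["temp_l1"] = df["temp"].shift(1)
df["temp_l2"] = df["temp"].shift(2)
df = df.dropna()

res = smf.ols("crime ~ crime_l1 + temp_l1 + temp_l2", data=df).fit()
# Chi-square Wald test of the null of no causality: both temperature-lag coefficients are zero.
print(res.wald_test("temp_l1 = 0, temp_l2 = 0", use_f=False))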

13.
Criteria are essential for measuring the goodness of an experimental design. In this paper, lower bounds for various criteria in experimental design are reviewed according to the methodology of their construction. The criteria include the most well-known ones, which are frequently used as benchmarks for orthogonal arrays, uniform designs, supersaturated designs and other types of designs. To derive the lower bounds of these criteria, five different approaches are explored, and some new results are given. Throughout the paper, some relationships among different types of lower bounds are also discussed.

14.
This paper focuses on a situation in which a set of treatments is associated with a response through a set of supplementary variables, in linear models as well as discrete models. In this situation, we demonstrate that the causal effect can be estimated more accurately from the set of supplementary variables. In addition, we show that the set of supplementary variables can include selection variables and proxy variables as well. Furthermore, we propose selection criteria for supplementary variables based on the estimation accuracy of causal effects. From graph structures based on our results, we can judge the situations under which the causal effect can be estimated more accurately by supplementary variables, and reliably evaluate the causal effects from observed data.
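A small sketch of the basic point in the linear case: adjusting for a supplementary variable can make the estimated treatment effect more precise. The data-generating values are hypothetical and the paper's graphical selection criteria are not reproduced.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 2000
treat = rng.binomial(1, 0.5, n)
supp = rng.normal(0, 1, n)                          # supplementary variable
y = 1.0 + 0.5 * treat + 2.0 * supp + rng.normal(0, 1, n)

for X in (np.column_stack([treat]), np.column_stack([treat, supp])):
    fit = sm.OLS(y, sm.add_constant(X)).fit()
    # Treatment-effect estimate and its standard error, without and with the supplementary variable.
    print(round(fit.params[1], 3), round(fit.bse[1], 4))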

15.
The problem of forecasting a time series by using information provided by a second time series is considered. Two multivariate extensions of Singular Spectrum Analysis (SSA) are compared in terms of forecast error: Horizontal Multi-channel SSA and Stepwise Common SSA. Different signal structures, defined in terms of trend, period, amplitude and phase, are investigated. In broad terms we find that neither Horizontal Multi-channel SSA nor Stepwise Common SSA is best in all cases. Horizontal Multi-channel SSA is outperformed particularly in cases where different trends are considered.

16.
In binary regression, imbalanced data arise when the proportion of observed zeros (or ones) is significantly greater than the proportion of ones (or zeros). In this work, we evaluate two methods developed to deal with imbalanced data and compare them with the use of asymmetric links. The results of a simulation study show that the correction methods do not adequately correct the bias in the estimation of regression coefficients, and that the models considered with power and reverse-power links produce better results for certain types of imbalanced data. Additionally, we present an application to imbalanced data, identifying the best model among the various ones proposed. The parameters are estimated using a Bayesian approach based on the Hamiltonian Monte Carlo method with the No-U-Turn Sampler algorithm, and the models are compared using different model-comparison criteria, predictive evaluation and quantile residuals.
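A hedged sketch of one such model: a Bayesian binary regression with a power link, p = sigmoid(eta)^lambda, sampled with NUTS via PyMC. The priors, the particular link form, and the simulated data are illustrative assumptions, not the paper's exact specification.

import numpy as np
import pymc as pm
import arviz as az

rng = np.random.default_rng(6)
n = 1000
x = rng.normal(0, 1, n)
p_true = (1.0 / (1.0 + np.exp(-(-1.0 + 1.0 * x)))) ** 3.0   # imbalanced outcome (mostly zeros)
y = rng.binomial(1, p_true)

with pm.Model():
    beta0 = pm.Normal("beta0", 0, 5)
    beta1 = pm.Normal("beta1", 0, 5)
    lam = pm.Gamma("lam", 2, 2)                              # power-link shape parameter
    p = pm.math.sigmoid(beta0 + beta1 * x) ** lam
    pm.Bernoulli("y", p=p, observed=y)
    idata = pm.sample(1000, tune=1000)                       # NUTS is the default sampler

print(az.summary(idata, var_names=["beta0", "beta1", "lam"]))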

17.
Drug switchability requires evidence of individual bioequivalence, which refers to the comparison of the closeness between the two distributions of pharmacokinetic (PK) responses from the same subject obtained under repeated administrations of the test and reference formulations. Advantages and drawbacks of the current statistical procedures for the assessment of individual bioequivalence are discussed, with emphasis on aggregate-based criteria. An intersection–union test based on disaggregate criteria is proposed for the evaluation of individual bioequivalence. In addition, a modified aggregate criterion is suggested to overcome the drawbacks of aggregate criteria. The relationships among the different criteria are examined, and the performance of the procedures is compared. A numerical example is given to illustrate the proposed procedures.

18.
Adaptive sample size adjustment (SSA) for clinical trials consists of examining early subsets of on-trial data to adjust estimates of sample size requirements. Blinded SSA is often preferred over unblinded SSA because it obviates many logistical complications of the latter and generally introduces less bias. On the other hand, current blinded SSA methods for binary data offer little to no new information about the treatment effect, ignore uncertainties associated with the population treatment proportions, and/or depend on enhanced randomization schemes that risk partial unblinding. I propose an innovative blinded SSA method for use when the primary analysis is a non-inferiority or superiority test regarding a risk difference. The method incorporates evidence about the treatment effect via the likelihood function of a mixture distribution. I compare the new method with an established one and with the fixed sample size study design, in terms of maximization of an expected utility function. The new method maximizes the expected utility better than the comparators do, under a range of assumptions. I illustrate the use of the proposed method with an example that incorporates a Bayesian hierarchical model. Lastly, I suggest topics for future study regarding the proposed methods.

19.
This paper is an applied analysis of the causal structure of linear multi-equational econometric models. Its aim is to identify the kinds of relationships linking the endogenous variables of the model, distinguishing between causal links and feedback loops. The investigation is first carried out within a deterministic framework and then moves on to show how the results may change in a more realistic stochastic context. The causal analysis is then applied to a linear simultaneous-equation model explaining fertility rates. The analysis is carried out by means of RATS programming code designed to show the nature of the relationships within the model.

20.
It is shown that investment under financing constraints can be modeled as a one-sided deviation from a frictionless investment level, and that the effects of financing constraints can be identified and quantified by imposing a distributional assumption on those effects. Panel data on Taiwanese manufacturing firms between 1989 and 1996 are used in the estimation. It is found that (1) some of the sorting criteria used in the literature do not have significant and monotonic relationships with the degree of financing constraint, resulting in problematic sample separations, and (2) the effects of financial liberalization in Taiwan are such that investment efficiency improved over time for a typical firm, and the improvement was particularly large for smaller firms.
