Similar Articles
20 similar articles found.
1.
In this article, a novel hybrid method to forecast stock prices is proposed. The method is based on the wavelet transform, wavelet denoising, linear models (the autoregressive integrated moving average (ARIMA) model and the exponential smoothing (ES) model), and nonlinear models (BP and RBF neural networks). The wavelet transform provides a set of constitutive series that are better behaved for prediction than the raw stock series. Wavelet denoising is used to eliminate slight random fluctuations in the stock series. The ARIMA and ES models forecast the linear component of the denoised stock series, and BP and RBF neural networks are then used as nonlinear pattern-recognition tools to correct the estimation error of the linear models' predictions. The proposed method is evaluated on the Shanghai and Shenzhen stock markets, and the results are compared with some of the most recent stock price forecasting methods. The results show that the proposed hybrid method provides a considerable improvement in forecasting accuracy. The method can also be applied to analyzing and forecasting the reliability of products or systems, improving the accuracy of reliability engineering.
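
A minimal sketch of this kind of decomposition pipeline is given below, assuming a Daubechies-4 wavelet, soft-threshold denoising, an ARIMA(1,1,1) order, and a small MLP standing in for the BP/RBF networks in the residual-correction step; these settings and the synthetic series are illustrative and are not taken from the article.

```python
# Minimal sketch of a wavelet-denoise -> ARIMA -> neural-network residual
# correction pipeline. Wavelet choice ('db4'), ARIMA order, and network size
# are illustrative assumptions, not the authors' settings.
import numpy as np
import pywt
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
price = np.cumsum(rng.normal(0.1, 1.0, 500)) + 100        # synthetic "stock" series

# 1) Wavelet denoising: soft-threshold the detail coefficients.
coeffs = pywt.wavedec(price, "db4", level=3)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # noise estimate from finest scale
thr = sigma * np.sqrt(2 * np.log(len(price)))
coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")[: len(price)]

# 2) Linear model: ARIMA fit on the denoised series.
train = denoised[:450]
arima = ARIMA(train, order=(1, 1, 1)).fit()
linear_fit = arima.predict(start=1, end=len(train) - 1)   # in-sample one-step fits
residuals = train[1:] - linear_fit

# 3) Nonlinear correction: an MLP learns the residual pattern from its lags.
lags = 5
X = np.column_stack([residuals[i : len(residuals) - lags + i] for i in range(lags)])
y = residuals[lags:]
mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X, y)

# 4) Combined one-step forecast = ARIMA forecast + predicted residual.
next_linear = arima.forecast(steps=1)[0]
next_resid = mlp.predict(residuals[-lags:].reshape(1, -1))[0]
print("hybrid forecast:", next_linear + next_resid)
```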

2.
3.
Serials Review, 2012, 38(4): 219–226
Abstract

This study uses systematic random sampling to compare the content of “Beall’s List of Predatory Journals and Publishers” and “Cabell’s Blacklist” of journals. The Beall’s List data were generated from the new site, which maintains an updated list alongside the original list. The study found that 28.5% of the sampled Beall’s List publishers are out of business and that some Cabell’s Blacklist journals have ceased publication. The main takeaway is that, among the sampled Beall’s List publishers with a working journal-publishing website, only 31.8% can be found on Cabell’s Blacklist.

4.
5.
When the null hypothesis of Friedman’s test is rejected, a wide variety of multiple comparisons can be used to determine which treatments differ from each other. We discuss the contexts in which different multiple comparisons should be applied when the population follows discrete distributions commonly used to model count data in biological and ecological fields. Our simulation study shows that the sign test is very conservative, while Fisher’s LSD and Tukey’s HSD tests computed on ranks are the most liberal. The theoretical considerations are illustrated with data on the Azores Buzzard (Buteo buteo rothschildi) population from the Azores, Portugal.
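
The workflow can be illustrated with a short sketch: run Friedman's test on a blocks-by-treatments table of counts and, if it rejects, follow up with pairwise sign tests under a Bonferroni adjustment. The Poisson data, the number of blocks, and the adjustment are assumptions made for illustration, not the simulation design of the study; scipy.stats.binomtest requires SciPy 1.7 or later.

```python
# Sketch: Friedman test followed by a pairwise sign-test post hoc with a
# Bonferroni adjustment. Data shape and count model (Poisson) are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_blocks, k = 30, 4
counts = rng.poisson(lam=[3, 3, 4, 6], size=(n_blocks, k))   # blocks x treatments

stat, p = stats.friedmanchisquare(*(counts[:, j] for j in range(k)))
print(f"Friedman chi2 = {stat:.2f}, p = {p:.4f}")

if p < 0.05:
    m = k * (k - 1) // 2                                      # number of pairwise comparisons
    for a in range(k):
        for b in range(a + 1, k):
            diff = counts[:, a] - counts[:, b]
            diff = diff[diff != 0]                            # sign test drops ties
            wins = int(np.sum(diff > 0))
            p_ab = stats.binomtest(wins, n=len(diff), p=0.5).pvalue
            print(f"treatments {a} vs {b}: adjusted p = {min(1.0, p_ab * m):.4f}")
```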

6.
Baker (2008) introduced a new method for constructing multivariate distributions with given marginals based on order statistics. In this paper, we provide a test of independence for a pair of absolutely continuous random variables (X, Y) jointly distributed according to Baker’s bivariate distributions. Our purpose is to test the hypothesis that X and Y are independent versus the alternative that X and Y are positively (negatively) quadrant dependent. The asymptotic distribution of the proposed test statistic is investigated. Also, the powers of the proposed test and the class of distribution-free tests proposed by Kochar and Gupta (1987) are compared empirically via a simulation study.
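
As a rough illustration of the distribution-free competitors mentioned above, the sketch below runs a one-sided Kendall-tau test of independence against positive dependence. The Gaussian-copula data generation is an assumption made purely for illustration, not Baker's order-statistics construction, and the one-sided alternative option requires a recent SciPy release.

```python
# Sketch: one-sided Kendall-tau test of independence against positive
# dependence. The Gaussian-copula data below is illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, rho = 200, 0.3
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
x, y = stats.norm.cdf(z[:, 0]), stats.norm.cdf(z[:, 1])   # uniform marginals

tau, p = stats.kendalltau(x, y, alternative="greater")    # H1: positive dependence
print(f"tau = {tau:.3f}, one-sided p = {p:.4f}")
```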

7.
Dementia caused by Alzheimer’s disease (AD) is one of the main medical and social challenges worldwide for the coming years and decades. Automated analysis of changes in the electroencephalogram (EEG) of patients with AD may contribute to improving the quality of medical diagnoses. In this paper, measures based on uni- and multivariate spectral densities are studied in order to measure slowing and, in greater detail, reduced synchrony in EEG signals. An EEG segment is interpreted as a sample of a (weakly) stationary stochastic process, and the spectral density is computed using an indirect estimator. Slowing is assessed by calculating the spectral power in predefined frequency bands. As measures of synchrony between single EEG signals, we analyze coherences, partial coherences, and bivariate and conditional Granger causality; for measuring synchrony between groups of EEG signals, we consider coherences, partial coherences, bivariate and conditional Granger causality between the respective first principal components of each group, and dynamic canonical correlations. As a measure of local synchrony within a group, the amount of variance explained by the respective first principal component of static and dynamic principal component analysis is investigated. These measures are computed, by way of example, for resting-state EEG recordings from 83 subjects diagnosed with probable AD, where the severity of AD is quantified by the Mini-Mental State Examination score.
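
A minimal sketch of the two simplest ingredients, relative band power (slowing) and magnitude-squared coherence (synchrony), is given below for two synthetic channels. The Welch estimator, the band edges, and the sampling rate are common defaults assumed here; they are not the indirect spectral estimator or the full set of multivariate measures used in the article.

```python
# Sketch: relative band power and magnitude-squared coherence for two
# synthetic EEG-like channels sharing a 10 Hz (alpha) component.
import numpy as np
from scipy import signal

fs = 256                                              # sampling rate in Hz (assumed)
rng = np.random.default_rng(3)
t = np.arange(0, 30, 1 / fs)
common = np.sin(2 * np.pi * 10 * t)                   # shared alpha component
ch1 = common + rng.normal(0, 1, t.size)
ch2 = 0.8 * common + rng.normal(0, 1, t.size)

bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
f, pxx = signal.welch(ch1, fs=fs, nperseg=fs * 2)
total = pxx[(f >= 1) & (f <= 30)].sum()
for name, (lo, hi) in bands.items():
    mask = (f >= lo) & (f < hi)
    print(f"{name}: relative power = {pxx[mask].sum() / total:.3f}")

f_c, cxy = signal.coherence(ch1, ch2, fs=fs, nperseg=fs * 2)
alpha_band = (f_c >= 8) & (f_c < 13)
print(f"mean alpha-band coherence: {cxy[alpha_band].mean():.3f}")
```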

8.
The analysis highlights the relationship between the territorial distribution of foreigners in Lombardy and the economic space they occupy, territorially defined by the Local Labour Systems. In particular, using these units, the study assesses the importance of ethnic networks and local labour market conditions in defining the varying concentration of foreign nationalities living in the territory. The research was partially supported by grants from CNR (99.01522.CT10). Preliminary findings were presented at the SIS (Società Italiana di Statistica) Annual Meeting, Milano Bicocca, 2002.

9.
10.
Let {x_ij (1 ≤ j ≤ n_i) | i = 1, 2, …, k} be k independent samples of sizes n_i from the respective distribution functions F_i(x) (1 ≤ i ≤ k). A classical statistical problem is to test whether these k samples came from a common distribution function F(x), whose form may or may not be known. In this paper, we consider the complementary problem of estimating the distribution functions suspected to be homogeneous in order to improve on the basic estimator known as the empirical distribution function (edf), in an asymptotic setup. Accordingly, we consider four additional estimators, namely the restricted estimator (RE), the preliminary test estimator (PTE), the shrinkage estimator (SE), and the positive rule shrinkage estimator (PRSE), and study their properties based on the mean squared error (MSE) and relative risk efficiency (RRE), with tables and graphs. We observe that for k ≥ 4 the positive rule SE performs uniformly better than both the shrinkage and the unrestricted estimator, while the PTE works reasonably well for k < 4.
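
The idea behind the restricted and preliminary-test estimators can be sketched as follows: pool the k samples into one edf when a homogeneity pretest does not reject, and keep the individual edfs otherwise. The choice of pretest (the k-sample Anderson-Darling test) and the significance level are assumptions for illustration; the article's shrinkage and positive-rule estimators are not reproduced here.

```python
# Sketch of the idea behind the restricted and preliminary-test estimators:
# pool the samples when a homogeneity pretest does not reject. The pretest and
# alpha are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
samples = [rng.normal(0, 1, 60), rng.normal(0, 1, 80), rng.normal(0.1, 1, 70)]

def edf(sample):
    """Return the empirical distribution function of one sample."""
    xs = np.sort(sample)
    return lambda t: np.searchsorted(xs, t, side="right") / xs.size

unrestricted = [edf(s) for s in samples]          # one edf per sample
restricted = edf(np.concatenate(samples))         # pooled edf (assumes homogeneity)

res = stats.anderson_ksamp(samples)               # homogeneity pretest
alpha = 0.05
if res.significance_level > alpha:                # approximate (capped) p-value
    pte = [restricted] * len(samples)             # preliminary-test estimator -> pooled edf
else:
    pte = unrestricted                            # keep the individual edfs

t0 = 0.0
print("edf estimates at t = 0:", [round(F(t0), 3) for F in pte])
```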

11.
A generalized Holm’s procedure is proposed that can reject several null hypotheses at each step sequentially and also strongly controls the family-wise error rate regardless of the dependence structure of the individual test statistics. The new procedure is more powerful than Holm’s procedure if the number of rejections m (with m > 0) is prespecified before the test.
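
For reference, the classical Holm step-down procedure that this proposal generalizes can be sketched in a few lines; the generalized multi-rejection version itself is not reproduced here.

```python
# Sketch of the classical Holm step-down procedure: reject hypotheses one at a
# time while strongly controlling the family-wise error rate.
import numpy as np

def holm(pvalues, alpha=0.05):
    """Return a boolean rejection decision for each hypothesis."""
    p = np.asarray(pvalues)
    m = p.size
    order = np.argsort(p)
    reject = np.zeros(m, dtype=bool)
    for step, idx in enumerate(order):
        if p[idx] <= alpha / (m - step):     # compare p_(i) with alpha / (m - i + 1)
            reject[idx] = True
        else:
            break                            # stop at the first non-rejection
    return reject

print(holm([0.001, 0.012, 0.02, 0.30]))      # -> [ True  True  True False] at alpha = 0.05
```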

12.
We reconsider the derivation of Blest’s (2003) skewness-adjusted version of the classical moment-based coefficient of kurtosis and propose an adaptation of it that generally eliminates the effects of asymmetry somewhat more successfully. Lower bounds are provided for the two skewness-adjusted kurtosis moment measures as functions of the classical coefficient of skewness. The results of a Monte Carlo experiment designed to investigate the sampling properties of numerous moment-based estimators of the two skewness-adjusted kurtosis measures are used to identify the estimators with the lowest mean squared error for small to medium-sized samples drawn from distributions with varying levels of asymmetry and tail weight.
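
The classical moment coefficients that these adjusted measures build on can be computed as below; Blest's adjustment itself is not reproduced here, and the gamma sample is only an example of an asymmetric distribution.

```python
# Sketch of the classical moment-based skewness and kurtosis coefficients that
# the skewness-adjusted kurtosis measures build on.
import numpy as np

def moment_coefficients(x):
    """Return the classical skewness (sqrt(b1)) and kurtosis (b2) coefficients."""
    x = np.asarray(x, dtype=float)
    m = x.mean()
    m2 = np.mean((x - m) ** 2)
    m3 = np.mean((x - m) ** 3)
    m4 = np.mean((x - m) ** 4)
    skew = m3 / m2 ** 1.5        # sqrt(b1)
    kurt = m4 / m2 ** 2          # b2 (equals 3 for the normal distribution)
    return skew, kurt

rng = np.random.default_rng(5)
print(moment_coefficients(rng.gamma(shape=2.0, size=10_000)))   # right-skewed sample
```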

13.

This article provides a concise overview of the main mathematical theory of Benford’s law in a form accessible to scientists and students who have had first courses in calculus and probability. In particular, one of the main objectives here is to aid researchers who are interested in applying Benford’s law, and need to understand general principles clarifying when to expect the appearance of Benford’s law in real-life data and when not to expect it. A second main target audience is students of statistics or mathematics, at all levels, who are curious about the mathematics underlying this surprising and robust phenomenon, and may wish to delve more deeply into the subject. This survey of the fundamental principles behind Benford’s law includes many basic examples and theorems, but does not include the proofs or the most general statements of the theorems; rather, it provides precise references where both may be found.
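
A minimal numerical illustration of the first-digit law, P(d) = log10(1 + 1/d), is sketched below; the log-normal sample is an arbitrary choice of data spanning several orders of magnitude, the kind of data for which Benford behavior is typically expected.

```python
# Sketch: compare observed leading-digit frequencies against Benford's law,
# P(d) = log10(1 + 1/d). The log-normal data below is illustrative.
import numpy as np

rng = np.random.default_rng(6)
data = rng.lognormal(mean=0.0, sigma=2.5, size=10_000)      # spans many orders of magnitude

first_digits = np.array([int(f"{x:e}"[0]) for x in data])   # leading significant digit
observed = np.bincount(first_digits, minlength=10)[1:] / data.size
benford = np.log10(1 + 1 / np.arange(1, 10))

for d in range(1, 10):
    print(f"digit {d}: observed {observed[d - 1]:.3f}  Benford {benford[d - 1]:.3f}")
```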


14.
Recently, the conditional Rényi divergence of order α and Kerridge’s inaccuracy measures were studied by Navarro et al. (2014). In the present article, a generalized dynamic conditional Kerridge inaccuracy measure is introduced, which can be represented as the sum of a conditional Rényi divergence and a Rényi entropy. Some useful bounds are obtained using the concept of likelihood ratio order. The results are extended to weighted distributions. Sufficient conditions are obtained for the monotonicity of the proposed measure. Characterizations of the bivariate exponential conditional distribution are presented based on the proposed measure.
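
For concreteness, the two building blocks named in this decomposition, the Rényi entropy and the Rényi divergence of order α, can be sketched for discrete distributions as below; the dynamic conditional versions used in the article are not reproduced.

```python
# Sketch: Renyi entropy and Renyi divergence of order alpha for discrete
# distributions with strictly positive probabilities.
import numpy as np

def renyi_entropy(p, alpha):
    """H_alpha(p) = (1 / (1 - alpha)) * log(sum p_i^alpha), alpha != 1."""
    p = np.asarray(p, dtype=float)
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def renyi_divergence(p, q, alpha):
    """D_alpha(p||q) = (1 / (alpha - 1)) * log(sum p_i^alpha q_i^(1-alpha)), alpha != 1."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return np.log(np.sum(p ** alpha * q ** (1.0 - alpha))) / (alpha - 1.0)

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])
print(renyi_entropy(p, alpha=2.0), renyi_divergence(p, q, alpha=2.0))
```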

15.
16.
The primary purpose of sampling inspection is the protection of the consumer’s interests. Although, under simple cost models, sampling inspection never serves the producer’s interest, some form of sampling inspection can be beneficial to the consumer under the same assumptions. We consider the case of isolated lot inspection and examine the consumer risk, economic sample design, and errors in the inspection process. Acceptance sampling is shown to be cost-effective to the consumer whenever the lot quality is less than perfect, and even for perfect lot quality in the presence of inspection errors.
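
A minimal sketch of the consumer-risk calculation for an isolated lot under a single sampling plan (n, c) is given below, using the hypergeometric acceptance probability; the lot size, plan, and limiting quality figures are illustrative assumptions, not values from the article.

```python
# Sketch: probability of acceptance (consumer's risk) for a single sampling
# plan (n, c) applied to an isolated lot of size N with D defectives.
from scipy.stats import hypergeom

N, n, c = 1000, 80, 2          # lot size, sample size, acceptance number (assumed)
D_ltpd = 50                    # defectives in a lot at the limiting quality level

# Accept the lot if at most c defectives appear in the sample of n items.
p_accept = hypergeom.cdf(c, N, D_ltpd, n)
print(f"consumer's risk at {D_ltpd / N:.0%} defective: {p_accept:.3f}")
```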

17.
Abstract

Technical services staff, along with programmers, supervisors, and frontline librarians, participate in all sorts of systems. Whether they recognize it or not, they are used to interacting with the world through the lens of the systems they work with. In this presentation from the North Carolina Serials Conference, Andreas Orphanides looks at some of the challenges of interacting with the world in terms of systems, discusses the human costs of failing to recognize the limitations of systems, and provides a framework for thinking about systems to help ensure that our systems respect the humanity of their human participants.

18.
Serials Review, 2012, 38(4): 245–247
Abstract

For over 20 years, Scientific Electronic Library Online (SciELO) has been engaged in an ambitious program of open access journal publishing. In Brazil and 15 other countries, society journals and other publications rely on SciELO for online publishing infrastructure and a platform that gives visibility to a global audience. SciELO’s network of national level collections has expanded to 16 countries and provides an alternative model to approaches centered around commercial publishers or new open access journals.

19.
Fisher succeeded early on in redefining Student’s t-distribution in geometrical terms on a central hypersphere. Intriguingly, a noncentral analytical extension of this fundamental Fisher–Student central hypersphere h-distribution does not exist. We therefore set out to derive the noncentral h-distribution and use it to graphically illustrate the limitations of the Neyman–Pearson null hypothesis significance testing framework and the strengths of the Bayesian statistical hypothesis analysis framework on the hypersphere polar axis, a compact nontrivial one-dimensional parameter space. Using a geometrically meaningful maximal entropy prior, we requalify the apparent failure of an important psychological science reproducibility project. We proceed to show that the Bayes factor appropriately models the two-sample t-test p-value density of a gene expression profile produced by high-throughput genomic-scale microarray technology, and provides a simple expression for a local false discovery rate, addressing the multiple hypothesis testing problem brought about by such a technology.

20.