Similar Articles
20 similar articles found (search time: 15 ms)
1.
In this paper, we use simulated data to investigate the power of different causality tests in a two-dimensional vector autoregressive (VAR) model. The data are generated in a nonlinear environment that is modelled using a logistic smooth transition autoregressive function. We use both linear and nonlinear causality tests to investigate the unidirectional causality relationship and compare the power of these tests. The linear test is the commonly used Granger causality F test. The nonlinear test is a non-parametric test based on Baek and Brock [A general test for non-linear Granger causality: Bivariate model. Tech. Rep., Iowa State University and University of Wisconsin, Madison, WI, 1992] and Hiemstra and Jones [Testing for linear and non-linear Granger causality in the stock price–volume relation, J. Finance 49(5) (1994), pp. 1639–1664]. When implementing the nonlinear test, we separately use the original data, the linear VAR filtered residuals, and the wavelet-decomposed series based on wavelet multiresolution analysis. The VAR filtered residuals and the wavelet-decomposed series are used to extract the nonlinear structure of the original data. The simulation results show that the non-parametric test based on the wavelet-decomposed series (which is a model-free approach) has the highest power to detect the causality relationship in nonlinear models.
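The linear leg of this comparison is the standard Granger causality F test: regress y on its own lags (restricted model) and on its own lags plus lags of x (unrestricted model), then compare residual sums of squares. A minimal numpy sketch on assumed simulated data (an ordinary linear VAR, not the paper's smooth-transition design):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 2                       # sample size and VAR lag order

# Simulate a bivariate system in which x Granger-causes y
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.3 * y[t - 1] + 0.5 * x[t - 1] + rng.standard_normal()

def granger_f(y, x, p):
    """F statistic of H0: lags of x do not help predict y, given lags of y."""
    n = len(y)
    Y = y[p:]
    lags_y = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    lags_x = np.column_stack([x[p - k:n - k] for k in range(1, p + 1)])
    X_r = np.hstack([np.ones((n - p, 1)), lags_y])            # restricted
    X_u = np.hstack([np.ones((n - p, 1)), lags_y, lags_x])    # unrestricted
    rss_r = np.sum((Y - X_r @ np.linalg.lstsq(X_r, Y, rcond=None)[0]) ** 2)
    rss_u = np.sum((Y - X_u @ np.linalg.lstsq(X_u, Y, rcond=None)[0]) ** 2)
    df2 = n - p - X_u.shape[1]
    return float(((rss_r - rss_u) / p) / (rss_u / df2))

f_xy = granger_f(y, x, p)   # x -> y: expected large
f_yx = granger_f(x, y, p)   # y -> x: expected near 1
```

Under H0 the statistic is F(p, n − 3p − 1) distributed; the nonlinear Baek–Brock/Hiemstra–Jones test is not reproduced here.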

2.
ABSTRACT

We propose a semiparametric approach to estimate the existence and location of a statistical change-point in a nonlinear multivariate time series contaminated with an additive noise component. In particular, we consider a p-dimensional stochastic process of independent multivariate normal observations where the mean function varies smoothly except at a single change-point. Our approach involves conducting a Bayesian analysis on the empirical detail coefficients of the original time series after a wavelet transform. If the mean function of our time series can be expressed as a multivariate step function, we find our Bayesian-wavelet method performs comparably with classical parametric methods such as maximum likelihood estimation. The advantage of our multivariate change-point method is seen in how it applies to a much larger class of mean functions that require only general smoothness conditions.
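As a toy illustration of why wavelet detail coefficients localize a change-point, the sketch below applies a first-level Haar transform to a mean-shifted signal and locates the shift at the largest-magnitude detail coefficient. The paper's actual method is a Bayesian analysis of the empirical detail coefficients; the signal, shift size, and noise level here are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 512
signal = np.where(np.arange(n) < 301, 0.0, 3.0)    # mean shift at t = 301
x = signal + 0.15 * rng.standard_normal(n)

# First-level Haar detail coefficients: d_i = (x[2i] - x[2i+1]) / sqrt(2)
d = (x[0::2] - x[1::2]) / np.sqrt(2.0)

# The largest-magnitude detail coefficient straddles the change-point
i_star = int(np.argmax(np.abs(d)))
chg_hat = 2 * i_star + 1
```

Away from the jump the detail coefficients are pure noise, while the coefficient whose pair straddles the jump has mean −3/√2, which is why thresholding or a Bayesian analysis of these coefficients can isolate the change-point.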

3.
4.
In a life-testing problem, it may be of interest to investigate the number of observations close to but greater than the median, minimum, or more generally, the ith progressively Type-II censored order statistic (PCOS-II). In this paper, we derive the probability mass and distribution functions of the number of observations greater than the ith PCOS-II for systems with identical components, in the cases of both independent and dependent components. The type of dependence considered among the component lifetimes is through an Archimedean copula. We also provide a goodness-of-fit method for determining the best copula model for a given PCOS-II. Finally, an example is provided to illustrate the results developed here.
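The counting quantity can be explored by Monte Carlo: simulate progressive Type-II censoring, record the i-th observed failure time, and count how many of the n latent lifetimes exceed it. A sketch assuming independent unit-exponential components and an arbitrary withdrawal scheme R (both illustrative choices, not the paper's copula setting):

```python
import numpy as np

rng = np.random.default_rng(2)

def count_above_ith(n, R, i, reps=2000):
    """Monte Carlo draws of the number of latent component lifetimes that
    exceed the i-th progressively Type-II censored failure time.
    Components are i.i.d. unit exponential; R[j] surviving units are
    withdrawn at random at the j-th observed failure."""
    counts = np.empty(reps, dtype=int)
    for r in range(reps):
        latent = list(rng.exponential(1.0, n))
        alive = list(latent)
        t_i = None
        for j in range(len(R)):
            t = min(alive)              # j-th observed failure
            alive.remove(t)
            if j + 1 == i:
                t_i = t
            for _ in range(R[j]):       # withdraw R[j] surviving units
                alive.pop(rng.integers(len(alive)))
        counts[r] = sum(v > t_i for v in latent)
    return counts

counts = count_above_ith(n=10, R=[1, 1, 1, 2], i=2)
mean_count = float(counts.mean())
```

With these inputs the count is 7 plus an indicator: seven units are still alive after the second failure (hence certainly exceed it), and the unit withdrawn at the first failure may or may not.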

5.
We extend four tests common in classical regression – Wald, score, likelihood ratio and F tests – to functional linear regression, for testing the null hypothesis that there is no association between a scalar response and a functional covariate. Using functional principal component analysis, we re-express the functional linear model as a standard linear model, where the effect of the functional covariate can be approximated by a finite linear combination of the functional principal component scores. In this setting, we consider application of the four traditional tests. The proposed testing procedures are investigated theoretically for densely observed functional covariates when the number of principal components diverges. Using the theoretical distribution of the tests under the alternative hypothesis, we develop a procedure for sample size calculation in the context of functional linear regression. The four tests are further compared numerically for both densely and sparsely observed noisy functional data in simulation experiments and using two real data applications.
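The re-expression step can be sketched as: compute functional principal component scores via an SVD of the discretized, centered curves, then apply an ordinary F test in the resulting finite-dimensional linear model. Everything below — grid, basis, sample sizes, effect size — is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, K = 200, 50, 3            # curves, grid points, retained PC scores

t = np.linspace(0.0, 1.0, m)
basis = np.vstack([np.sin((k + 1) * np.pi * t) for k in range(4)])
X = rng.standard_normal((n, 4)) @ basis          # discretized functional covariates
Xc = X - X.mean(axis=0)

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U[:, :K] * s[:K]                        # first K FPC scores

beta = np.array([0.8, 0.0, 0.0])                 # response loads on the first score
y = scores @ beta + rng.standard_normal(n)

# Classical F test of H0: no association, in the score-based linear model
Z = np.hstack([np.ones((n, 1)), scores])
coef = np.linalg.lstsq(Z, y, rcond=None)[0]
rss1 = float(np.sum((y - Z @ coef) ** 2))
rss0 = float(np.sum((y - y.mean()) ** 2))
F = ((rss0 - rss1) / K) / (rss1 / (n - K - 1))
```

The paper's asymptotics let K diverge with n; this fixed-K sketch only shows the mechanics of the reduction.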

6.
ABSTRACT

In a load-sharing system, the failure of a component affects the residual lifetime of the surviving components. We propose a model for the load-sharing phenomenon in k-out-of-m systems. The model is based on exponentiated conditional distributions of the order statistics formed by the failure times of the components. For an illustration, we consider two component parallel systems with the initial lifetimes of the components having Weibull and linear failure rate distributions. We analyze one data set to show that the proposed model may be a better fit than the model based on sequential order statistics.

7.
ABSTRACT

Fisher's linear discriminant analysis (FLDA) is known as a method to find a discriminative feature space for multi-class classification. As a theory extending FLDA to an ultimate nonlinear form, optimal nonlinear discriminant analysis (ONDA) has been proposed. ONDA indicates that the best theoretical nonlinear map for maximizing Fisher's discriminant criterion is formulated using the Bayesian a posteriori probabilities. In addition, the theory proves that FLDA is equivalent to ONDA when the Bayesian a posteriori probabilities are approximated by linear regression (LR). Due to some limitations of the linear model, there is room to improve FLDA by using stronger approximation/estimation methods. For the purpose of probability estimation, multinomial logistic regression (MLR) is more suitable than LR. Along this line, in this paper, we develop a nonlinear discriminant analysis (NDA) in which the posterior probabilities in ONDA are estimated by MLR. We also develop a way to introduce sparseness into discriminant analysis: by applying L1 or L2 regularization to LR or MLR, we can incorporate sparseness in FLDA and our NDA to increase generalization performance. The performance of these methods is evaluated by benchmark experiments on standard datasets and a face classification experiment.

8.
In this paper, we consider the problem of estimating the number of components of a superimposed nonlinear sinusoids model of a signal in the presence of additive noise. We propose and provide a detailed empirical comparison of robust methods for estimation of the number of components. The proposed methods, which are robust modifications of the commonly used information theoretic criteria, are based on various M-estimator approaches and are robust with respect to outliers present in the data and heavy-tailed noise. The proposed methods are compared with the usual non-robust methods through extensive simulations under varied model scenarios. We also present real signal analysis of two speech signals to show the usefulness of the proposed methodology.

9.
In this article, a novel hybrid method to forecast stock prices is proposed. The hybrid method is based on wavelet transform, wavelet denoising, linear models (the autoregressive integrated moving average (ARIMA) model and the exponential smoothing (ES) model), and nonlinear models (BP neural network and RBF neural network). The wavelet transform provides a set of better-behaved constitutive series than the stock series itself for prediction. Wavelet denoising is used to eliminate slight random fluctuations in the stock series. The ARIMA and ES models are used to forecast the linear component of the denoised stock series, and BP and RBF neural networks are then developed as tools for nonlinear pattern recognition to correct the estimation errors of the linear models' predictions. The proposed method is examined on data from the Shanghai and Shenzhen stock markets, and the results are compared with some of the most recent stock price forecasting methods. The results show that the proposed hybrid method provides a considerable improvement in forecasting accuracy. The method can also be applied to analyze and forecast the reliability of products or systems and to improve the accuracy of reliability engineering.
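Two of the pipeline's stages — wavelet denoising and a linear forecast of the denoised series — can be sketched as follows. The neural-network error-correction stage is omitted, and a one-level Haar transform with the universal threshold stands in for whatever wavelet basis and threshold the authors actually use; the "price" path is simulated:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 256
t = np.arange(n)
trend = 10.0 + 0.02 * t + 0.5 * np.sin(t / 8.0)   # assumed "true" price path
price = trend + 0.2 * rng.standard_normal(n)

# Step 1: one-level Haar transform and soft-threshold denoising
a = (price[0::2] + price[1::2]) / np.sqrt(2.0)    # approximation coefficients
d = (price[0::2] - price[1::2]) / np.sqrt(2.0)    # detail coefficients
sigma = np.median(np.abs(d)) / 0.6745             # robust noise-scale estimate
thr = sigma * np.sqrt(2.0 * np.log(n))            # universal threshold
d_den = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)
den = np.empty(n)
den[0::2] = (a + d_den) / np.sqrt(2.0)            # inverse Haar transform
den[1::2] = (a - d_den) / np.sqrt(2.0)

# Step 2: linear one-step-ahead forecast of the denoised series (AR(1) on differences)
dx = np.diff(den)
phi = float(dx[:-1] @ dx[1:]) / float(dx[:-1] @ dx[:-1])
forecast_linear = den[-1] + phi * dx[-1]

rmse_raw = float(np.sqrt(np.mean((price - trend) ** 2)))
rmse_den = float(np.sqrt(np.mean((den - trend) ** 2)))
```

In the full method, the residuals of the linear forecast would then be modelled by the BP/RBF networks; here the comparison of `rmse_den` against `rmse_raw` only illustrates the denoising step.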

10.
Partially linear regression models are semiparametric models that contain both linear and nonlinear components. They are extensively used in many scientific fields for their flexibility and convenient interpretability. In such analyses, testing the significance of the regression coefficients in the linear component is typically a key focus. Under the high-dimensional setting, i.e., “large p, small n,” the conventional F-test strategy does not apply because the coefficients need to be estimated through regularization techniques. In this article, we develop a new test using a U-statistic of order two, relying on a pseudo-estimate of the nonlinear component from the classical kernel method. Using the martingale central limit theorem, we prove the asymptotic normality of the proposed test statistic under some regularity conditions. We further demonstrate our proposed test's finite-sample performance by simulation studies and by analyzing some breast cancer gene expression data.

11.
The effect of nonstationarity in the time series columns of input data in principal components analysis is examined. Nonstationarity is very common among economic indicators collected over time, which are subsequently summarized into fewer indices for purposes of monitoring. Due to the simultaneous drifting of the nonstationary time series, usually caused by the trend, the first component averages all the variables without necessarily reducing dimensionality. Sparse principal components analysis can be used instead, but attainment of sparsity among the loadings (and hence dimension reduction) is influenced by the choice of the penalty parameters λ1,j. Simulated data with more variables than observations and with different patterns of cross-correlations and autocorrelations are used to illustrate the advantages of sparse principal components analysis over ordinary principal components analysis. Sparse component loadings for nonstationary time series data can be achieved provided that appropriate values of λ1,j are used. We provide the range of values of λ1,j that ensures convergence of the sparse principal components algorithm and consequently achieves sparsity of the component loadings.
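The mechanism by which an L1 penalty zeroes loadings can be sketched with an alternating-regression iteration with a soft-threshold step, run on simulated nonstationary columns that share a common trend. This is a simplified stand-in for the actual sparse PCA algorithm studied in the paper, and the penalty value `lam` is an arbitrary illustration of a λ1,j choice:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 50, 100                                        # fewer observations than variables
common = np.cumsum(rng.standard_normal(n))            # shared stochastic trend
X = 0.3 * rng.standard_normal((n, p)).cumsum(axis=0)  # idiosyncratic random walks
X[:, :10] += common[:, None]                          # only the first 10 columns carry the trend
Xc = X - X.mean(axis=0)

def sparse_pc(X, lam, iters=200):
    """First sparse loading vector via alternating regression with an
    L1 soft-threshold step (a simplified SPCA-style iteration)."""
    v = np.linalg.svd(X, full_matrices=False)[2][0]
    for _ in range(iters):
        u = X @ v
        u /= np.linalg.norm(u)
        w = X.T @ u
        w = np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)   # soft threshold
        if np.linalg.norm(w) == 0.0:
            break
        v = w / np.linalg.norm(w)
    return v

v_sparse = sparse_pc(Xc, lam=5.0)
n_nonzero = int(np.sum(v_sparse != 0))
```

With no penalty (`lam=0`) the first loading spreads weight over all drifting columns; with the penalty, weight should concentrate on the columns that genuinely share the trend.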

12.
An exploratory model analysis device we call CDF knotting is introduced. It is a technique we have found useful for exploring relationships between points in the parameter space of a model and global properties of the associated distribution functions. It can be used to alert the model builder to a condition we call lack of distinguishability, which is to nonlinear models what multicollinearity is to linear models. While there are simple remedial actions to deal with multicollinearity in linear models, techniques such as deleting redundant variables in those models do not have obvious parallels for nonlinear models. In some of these nonlinear situations, however, CDF knotting may lead to alternative models with fewer parameters whose distribution functions are very similar to those of the original overparameterized model. We also show how CDF knotting can be exploited as a mathematical tool for deriving limiting distributions, and we illustrate the technique for the three-parameter Weibull family, obtaining limiting forms and moment ratios that correct and extend previously published results. Finally, geometric insights obtained from CDF knotting are verified in the context of data fitting and estimation.
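Lack of distinguishability can be quantified crudely as the sup-distance between CDFs at different parameter points: nearby points whose CDFs nearly coincide are hard to tell apart from data. A small numpy illustration with two-parameter Weibull CDFs (the paper works with the three-parameter family; the parameter values here are arbitrary):

```python
import numpy as np

def weibull_cdf(x, shape, scale):
    # Weibull CDF: F(x) = 1 - exp(-(x/scale)^shape)
    return 1.0 - np.exp(-(x / scale) ** shape)

x = np.linspace(1e-6, 3.0, 2000)
# Two nearby parameter points: their CDFs nearly coincide (hard to distinguish)
d_close = float(np.max(np.abs(weibull_cdf(x, 3.5, 1.0) - weibull_cdf(x, 3.6, 1.01))))
# A distant parameter point: clearly distinguishable
d_far = float(np.max(np.abs(weibull_cdf(x, 3.5, 1.0) - weibull_cdf(x, 0.8, 1.0))))
```

CDF knotting itself is a graphical exploratory device; this sketch only makes the underlying closeness-of-CDFs idea concrete.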

13.
Summary.  The problem of component choice in regression-based prediction has a long history. The main cases where important choices must be made are functional data analysis, and problems in which the explanatory variables are relatively high dimensional vectors. Indeed, principal component analysis has become the basis for methods for functional linear regression. In this context the number of components can also be interpreted as a smoothing parameter, and so the viewpoint is a little different from that for standard linear regression. However, arguments for and against conventional component choice methods are relevant to both settings and have received significant recent attention. We give a theoretical argument, which is applicable in a wide variety of settings, justifying the conventional approach. Although our result is of minimax type, it is not asymptotic in nature; it holds for each sample size. Motivated by the insight that is gained from this analysis, we give theoretical and numerical justification for cross-validation choice of the number of components that is used for prediction. In particular we show that cross-validation leads to asymptotic minimization of mean summed squared error, in settings which include functional data analysis.
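Cross-validatory choice of the number of components can be sketched with principal component regression and the leave-one-out shortcut via the hat matrix. The data-generating choices below (dimensions, signal coefficients) are illustrative assumptions, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(6)
n, p, k_true = 120, 10, 3
X = rng.standard_normal((n, p)) @ rng.standard_normal((p, p))  # correlated predictors
U, s, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
scores = U * s                                    # principal component scores
y = U[:, :k_true] @ np.array([30.0, 25.0, 20.0]) + rng.standard_normal(n)

def loo_cv_error(scores, y, k):
    """Leave-one-out CV error of PC regression with k components,
    using the hat-matrix shortcut e_i / (1 - h_ii)."""
    Z = np.hstack([np.ones((len(y), 1)), scores[:, :k]])
    H = Z @ np.linalg.pinv(Z)
    resid = y - H @ y
    return float(np.mean((resid / (1.0 - np.diag(H))) ** 2))

errs = [loo_cv_error(scores, y, k) for k in range(1, p + 1)]
k_hat = int(np.argmin(errs)) + 1                  # CV-chosen number of components
```

The hat-matrix identity makes leave-one-out exact for linear smoothers at the cost of a single fit per candidate k, which is why CV over the number of components is cheap in this setting.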

14.
Hea-Jung Kim & Taeyoung Roh, Statistics, 2013, 47(5): 1082–1111
In regression analysis, a sample selection scheme often applies to the response variable, which results in observations missing not at random on that variable. In this case, a regression analysis using only the selected cases would lead to biased results. This paper proposes a Bayesian methodology to correct this bias, based on a semiparametric Bernstein polynomial regression model that incorporates the sample selection scheme into a stochastic monotone trend constraint, variable selection, and robustness against departures from the normality assumption. We present the basic theoretical properties of the proposed model, including its stochastic representation, quantification of the sample selection bias, a hierarchical model specification to handle the stochastic monotone trend constraint in the nonparametric component, simple bias-corrected estimation, and variable selection for the linear components. We then develop computationally feasible Markov chain Monte Carlo methods for semiparametric Bernstein polynomial functions with stochastically constrained parameter estimation and variable selection procedures. We demonstrate the finite-sample performance of the proposed model compared to existing methods using simulation studies and illustrate its use with two real data applications.

15.
The aim of this paper is to develop a national customer satisfaction index (CSI) for Jordan and to derive its theory using generalized maximum entropy. During the course of this research, we conducted two different surveys to complete the framework of this CSI. The first is a pilot study based on a CSI basket, conducted in order to select the main factors that comprise the Jordanian customer satisfaction index (JCSI). Based on two different analyses, namely nonlinear principal component analysis and factor analysis, the explained variances in the first and second dimensions were 50.32% and 16.99%, respectively, and the Cronbach α coefficients in the first and second dimensions were 0.923 and 0.521, respectively. The results of this survey suggest the inclusion of loyalty, complaints, expectations, image, and service quality as the main customer satisfaction factors of our proposed model. The second study is a practical implementation conducted at the Vocational Training Corporation in order to evaluate the proposed JCSI. The results indicate that the suggested components of the proposed model are significant and form a well-fitted model. We used the comparative fit index and the normed fit index as goodness-of-fit measures to evaluate the effectiveness of our proposed model; both indicate that the proposed model is a promising one.
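The Cronbach α reliability coefficient reported for the two dimensions has a simple closed form, α = k/(k−1) · (1 − Σ item variances / total-score variance). A sketch on simulated survey items (the data are invented, not the JCSI survey):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha; items is an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return float(k / (k - 1) * (1.0 - item_vars.sum() / total_var))

rng = np.random.default_rng(7)
n, k = 300, 6
factor = rng.standard_normal(n)
consistent = factor[:, None] + 0.5 * rng.standard_normal((n, k))  # one latent factor
noisy = rng.standard_normal((n, k))                               # unrelated items

alpha_hi = cronbach_alpha(consistent)   # internally consistent scale: alpha near 1
alpha_lo = cronbach_alpha(noisy)        # unrelated items: alpha near 0
```

High α (as in the paper's first dimension, 0.923) indicates that the items measure a common construct; low α (as in the second dimension, 0.521) indicates weaker internal consistency.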

16.
This paper describes the modelling and fitting of Gaussian Markov random field spatial components within a Generalized Additive Model for Location, Scale and Shape (GAMLSS). This allows modelling of any or all of the parameters of the distribution for the response variable using explanatory variables and spatial effects. The response variable distribution is allowed to be a non-exponential-family distribution. A new R package developed to achieve this is presented. We use Gaussian Markov random fields to model the spatial effect in Munich rent data and explore some features and characteristics of the data. The potential of using spatial analysis within GAMLSS is discussed. We argue that the flexibility of parametric distributions, the ability to model all the parameters of the distribution, and the diagnostic tools of GAMLSS provide an ideal environment for modelling spatial features of data.

17.
It is an important problem in reliability analysis to decide whether for a given k-out-of-n system the static or the sequential k-out-of-n model is appropriate. Often components are redundantly added to a system to protect against failure of the system. If the failure of any component of the system induces a higher rate of failure of the remaining components due to increased load, the sequential k-out-of-n model is appropriate. The increase of the failure rate of the remaining components after a failure of some component implies that the effects of the component redundancy are diminished. On the other hand, if all the components have the same failure distribution and whenever a failure occurs, the remaining components are not affected, the static k-out-of-n model is adequate. In this paper, we consider nonparametric hypothesis tests to make a decision between these two models. We analyze test statistics based on the profile score process as well as test statistics based on a multivariate intensity ratio and derive their asymptotic distribution. Finally, we compare the different test statistics.

18.
Dewei Wang, Chendi Jiang & Chanseok Park, Lifetime Data Analysis, 2019, 25(2): 341–360

The load-sharing model has been studied since the early 1940s to account for the stochastic dependence of components in a parallel system. It assumes that, as components fail one by one, the total workload applied to the system is shared by the remaining components and thus affects their performance. Such dependent systems have been studied in many engineering applications which include but are not limited to fiber composites, manufacturing, power plants, workload analysis of computing, software and hardware reliability, etc. Many statistical models have been proposed to analyze the impact of each redistribution of the workload; i.e., the changes on the hazard rate of each remaining component. However, they do not consider how long a surviving component has worked for prior to the redistribution. We name such load-sharing models as memoryless. To remedy this potential limitation, we propose a general framework for load-sharing models that account for the work history. Through simulation studies, we show that an inappropriate use of the memoryless assumption could lead to inaccurate inference on the impact of redistribution. Further, a real-data example of plasma display devices is analyzed to illustrate our methods.


19.
Abstract. In this paper, two non-parametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a more viable alternative to existing kernel-based approaches. The second estimator involves sequential fitting by univariate local polynomial quantile regressions for each additive component with the other additive components replaced by the corresponding estimates from the first estimator. The purpose of the extra local averaging is to reduce the variance of the first estimator. We show that the second estimator achieves oracle efficiency in the sense that each estimated additive component has the same variance as in the case when all other additive components were known. Asymptotic properties are derived for both estimators under dependent processes that are strictly stationary and absolutely regular. We also provide a demonstrative empirical application of additive quantile models to ambulance travel times.
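The building block of any quantile regression, additive or not, is the check (pinball) loss, whose minimizer estimates the τ-quantile. A univariate toy illustration (no additive structure, no local polynomials — just the loss):

```python
import numpy as np

def pinball(u, tau):
    """Check (pinball) loss of quantile regression."""
    return np.where(u >= 0, tau * u, (tau - 1.0) * u)

rng = np.random.default_rng(8)
y = rng.exponential(1.0, 5000)
tau = 0.75

# Minimizing mean pinball loss over a grid recovers the tau-quantile
grid = np.linspace(0.0, 5.0, 2001)
losses = [float(pinball(y - q, tau).mean()) for q in grid]
q_hat = grid[int(np.argmin(losses))]
q_true = -np.log(1.0 - tau)        # population 0.75-quantile of Exp(1)
```

In the additive model of the paper, each component is fitted by locally minimizing this same loss; the oracle-efficiency result concerns the variance of those componentwise fits.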

20.
Mixtures of multivariate t distributions provide a robust parametric extension to the fitting of data relative to normal mixtures. In the presence of a noise component, potential outliers, or data with longer-than-normal tails, one way to broaden the model is to consider t distributions. In this framework, the degrees of freedom act as a robustness parameter, tuning the heaviness of the tails and downweighting the effect of outliers on parameter estimation. The aim of this paper is to extend to mixtures of multivariate elliptical distributions some theoretical results about likelihood maximization on constrained parameter spaces. Further, a constrained monotone algorithm implementing maximum likelihood mixture decomposition of multivariate t distributions is proposed, to achieve improved convergence and robustness. Monte Carlo simulations and a real data study illustrate the better performance of the algorithm compared to earlier proposals.
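The downweighting role of the degrees of freedom ν can be seen in the EM weights of the multivariate t model, u_i = (ν + p)/(ν + δ_i), where δ_i is the squared Mahalanobis distance: outlying points receive small weights in the parameter updates. A sketch with a single component, known parameters, and an artificial outlier (all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(9)
p, nu = 2, 4.0                      # dimension and degrees of freedom
mu, Sigma = np.zeros(p), np.eye(p)

X = rng.standard_normal((100, p))
X[0] = [8.0, 8.0]                   # one gross outlier

Si = np.linalg.inv(Sigma)
delta = np.einsum('ij,jk,ik->i', X - mu, Si, X - mu)  # squared Mahalanobis distances
u = (nu + p) / (nu + delta)         # EM weights under the multivariate t model

w_outlier = float(u[0])
w_typical = float(np.median(u[1:]))
```

As ν → ∞ all weights tend to 1 and the normal-mixture EM is recovered; small ν sharpens the downweighting of extreme points.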

