Similar Documents

20 similar documents found.
1.
ABSTRACT

Likelihood ratio tests for a change in mean in a sequence of independent, normal random variables are based on the maximum two-sample t-statistic, where the maximum is taken over all possible changepoints. The maximum t-statistic has the undesirable characteristic that Type I errors are not uniformly distributed across possible changepoints. False positives occur more frequently near the ends of the sequence and occur less frequently near the middle of the sequence. In this paper we describe an alternative statistic that is based upon a minimum p-value, where the minimum is taken over all possible changepoints. The p-value at any particular changepoint is based upon both the two-sample t-statistic at that changepoint and the probability that the maximum two-sample t-statistic is achieved at that changepoint. The new statistic has a more uniform distribution of Type I errors across potential changepoints and it compares favorably with respect to statistical power, false discovery rates, and the mean square error of changepoint estimates.
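
A minimal NumPy sketch of the scan statistic described above: the maximum two-sample t-statistic over all candidate changepoints (not the authors' minimum p-value statistic). The function name, minimum segment length, and simulated data are illustrative assumptions only.

```python
import numpy as np

def max_two_sample_t(x, min_seg=2):
    """Scan every candidate changepoint k and return the largest absolute
    two-sample t-statistic comparing x[:k] with x[k:], plus its location."""
    n = len(x)
    best_t, best_k = -np.inf, None
    for k in range(min_seg, n - min_seg + 1):
        a, b = x[:k], x[k:]
        # pooled-variance two-sample t-statistic at changepoint k
        sp2 = ((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1)) / (n - 2)
        t = abs(a.mean() - b.mean()) / np.sqrt(sp2 * (1 / len(a) + 1 / len(b)))
        if t > best_t:
            best_t, best_k = t, k
    return best_t, best_k

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 60), rng.normal(1, 1, 40)])  # true change at 60
print(max_two_sample_t(x))
```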

2.
We consider the problem of estimating R=P(Y<X) when X and Y are independent Burr-type X random variables. We assume that the sample from each population contains one spurious observation. Bayes estimates are derived for exchangeable and identifiable cases. Monte Carlo simulation is carried out to compare the bias and the expected loss of R.
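
As a rough illustration of the stress-strength quantity R = P(Y < X) for Burr-type X variables (without the spurious-observation component or the Bayes estimators studied in the paper), the following Monte Carlo sketch assumes the parameterization with CDF F(x; θ) = (1 − exp(−x²))^θ, under which R = α/(α + β).

```python
import numpy as np

def rvs_burr_x(theta, size, rng):
    """Burr type X (generalized Rayleigh): CDF F(x) = (1 - exp(-x^2))^theta."""
    u = rng.uniform(size=size)
    return np.sqrt(-np.log(1.0 - u ** (1.0 / theta)))  # inverse-CDF sampling

rng = np.random.default_rng(1)
alpha, beta = 2.0, 3.0
x = rvs_burr_x(alpha, 100_000, rng)   # strength
y = rvs_burr_x(beta, 100_000, rng)    # stress
r_hat = np.mean(y < x)                # empirical P(Y < X)
print(r_hat, alpha / (alpha + beta))  # closed form under this parameterization
```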

3.
This article extends the theoretical analysis of spurious relationships and considers the situation where the deterministic components of the processes generating the individual series are independent, heavy-tailed, and subject to structural changes. It shows that when those sequences are used in ordinary least-squares regression, the convenient t-statistic procedures wrongly indicate that (i) spurious significance is established when regressing mean-stationary and trend-stationary series with structural changes, (ii) a spurious relationship occurs between broken mean-stationary and difference-stationary sequences, and (iii) the extent of spurious regression becomes stronger between difference-stationary and trend-stationary series in the presence of breaks. The spurious phenomenon is present regardless of the sample size and of whether the structural breaks take place at the same points or not. Simulation experiments confirm our asymptotic results and reveal that the spurious effects are not only sensitive to the relative location of the structural changes within the sample, but also depend seriously on the stable indexes.

4.
A “spurious regression” is one in which the time-series variables are nonstationary and independent. It is well known that in this context the OLS parameter estimates and the R² converge to functionals of Brownian motions, the “t-ratios” diverge in distribution, and the Durbin–Watson statistic converges in probability to zero. We derive corresponding results for some common tests for the normality and homoskedasticity of the errors in a spurious regression.
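
The classical spurious regression experiment is easy to reproduce. This hedged sketch (seed and sample size are arbitrary) regresses two independent random walks with statsmodels and prints the inflated t-ratio and R² together with the Durbin–Watson statistic collapsing toward zero.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(2)
n = 500
y = np.cumsum(rng.normal(size=n))   # random walk
x = np.cumsum(rng.normal(size=n))   # independent random walk

res = sm.OLS(y, sm.add_constant(x)).fit()
print("slope t-ratio:", res.tvalues[1])            # often looks "significant"
print("R-squared:    ", res.rsquared)              # often sizeable
print("Durbin-Watson:", durbin_watson(res.resid))  # close to zero
```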

5.
In partly linear models, the dependence of the response y on (x^T, t) is modeled through the relationship y = x^T β + g(t) + ε, where ε is independent of (x^T, t). We are interested in developing an estimation procedure that allows us to combine the flexibility of the partly linear models studied by several authors, while including some variables that belong to a non-Euclidean space. The motivating application of this paper deals with the explanation of atmospheric SO2 pollution incidents using these models when some of the predictive variables take values on a cylinder. In this paper, the estimators of β and g are constructed when the explanatory variables t take values on a Riemannian manifold, and the asymptotic properties of the proposed estimators are obtained under suitable conditions. We illustrate the use of this estimation approach with an environmental data set and explore the performance of the estimators through a simulation study.
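
The paper's estimators handle predictors t on a Riemannian manifold; as a simpler illustration of the partly linear structure itself, the sketch below uses a classical Robinson-type (double-residual) estimator for scalar t with a Nadaraya–Watson smoother. The bandwidth, data-generating process, and function names are assumptions for the example, not the paper's procedure.

```python
import numpy as np

def nw_smooth(t, v, h):
    """Nadaraya-Watson smooth of each column of v on t (Gaussian kernel, bandwidth h)."""
    w = np.exp(-0.5 * ((t[:, None] - t[None, :]) / h) ** 2)
    w /= w.sum(axis=1, keepdims=True)
    return w @ v

rng = np.random.default_rng(3)
n = 400
t = rng.uniform(0, 1, n)
x = rng.normal(size=(n, 2))
beta_true = np.array([1.5, -0.5])
g = np.sin(2 * np.pi * t)                        # unknown smooth component
y = x @ beta_true + g + rng.normal(scale=0.3, size=n)

h = 0.05
x_res = x - nw_smooth(t, x, h)                   # partial out t from x
y_res = y - nw_smooth(t, y, h)                   # partial out t from y
beta_hat = np.linalg.lstsq(x_res, y_res, rcond=None)[0]
g_hat = nw_smooth(t, y - x @ beta_hat, h)        # plug-in estimate of g(t)
print(beta_hat)
```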

6.
ABSTRACT

In this article we evaluate the performance of a randomization test for a subset of regression coefficients in a linear model. This randomization test is based on random permutations of the independent variables. It is shown that the method maintains its level of significance, except for extreme situations, and has power that approximates the power of another randomization test, which is based on the permutation of residuals from the reduced model. We also show, via an example, that the method of permuting independent variables is more valuable than other randomization methods because it can be used in connection with the downweighting of outliers.
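
A minimal sketch of the kind of randomization test discussed: the tested predictor is permuted and the partial F-statistic recomputed to build a reference distribution. The design, number of permutations, and helper names are illustrative assumptions, not the article's exact procedure.

```python
import numpy as np

def partial_f(y, X_full, X_reduced):
    """Partial F-statistic comparing the full OLS model to the reduced model."""
    def rss(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return np.sum((y - X @ beta) ** 2)
    df_num = X_full.shape[1] - X_reduced.shape[1]
    df_den = len(y) - X_full.shape[1]
    return ((rss(X_reduced) - rss(X_full)) / df_num) / (rss(X_full) / df_den)

rng = np.random.default_rng(4)
n = 100
x1, x2 = rng.normal(size=(n, 1)), rng.normal(size=(n, 1))
y = 1.0 + 2.0 * x1[:, 0] + 0.0 * x2[:, 0] + rng.normal(size=n)   # H0 true for x2

ones = np.ones((n, 1))
X_red = np.hstack([ones, x1])
f_obs = partial_f(y, np.hstack([ones, x1, x2]), X_red)

n_perm = 999
f_perm = np.empty(n_perm)
for b in range(n_perm):
    x2_p = rng.permutation(x2)                 # permute the tested predictor
    f_perm[b] = partial_f(y, np.hstack([ones, x1, x2_p]), X_red)

p_value = (1 + np.sum(f_perm >= f_obs)) / (n_perm + 1)
print(p_value)
```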

7.
ABSTRACT

Process capability indices measure the ability of a process to provide products that meet certain specifications. Few references deal with the capability of a process characterized by a functional relationship between a response variable and one or more explanatory variables, which is called a profile. Specifically, no reference analyses the capability of processes characterized by multivariate nonlinear profiles. In this paper, we propose a method to measure the capability of these processes, based on principal components for multivariate functional data and the concept of functional depth. A simulation study is conducted to assess the performance of the proposed method. An example from sugar production illustrates the applicability of this approach.

8.
ABSTRACT

In some situations, for example, in biology or psychology studies, we wish to determine whether the linear relationship between response variable and predictor variables differs in two populations. The analysis of covariance (ANCOVA) or, equivalently, the partial F-test approaches are the commonly used methods. In this study, the asymptotic distribution for the difference between two independent regression coefficients was established. The proposed method was used to derive the asymptotic confidence set for the difference between coefficients and hypothesis testing for the equality of the two regression models. Then a simulation study was conducted to compare the proposed method with the partial F method. The performance of the new method was comparable with that of the partial F method.
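
A small sketch of the underlying idea, assuming independent samples: compare the two fitted slopes with a Wald-type statistic whose standard error combines the two estimated variances. This is only the standard large-sample calculation, not the article's exact derivation.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(5)

def fit_group(n, slope, rng):
    """Simulate one group and fit a simple linear regression by OLS."""
    x = rng.normal(size=n)
    y = 1.0 + slope * x + rng.normal(size=n)
    return sm.OLS(y, sm.add_constant(x)).fit()

fit1 = fit_group(80, 1.0, rng)
fit2 = fit_group(120, 1.3, rng)

diff = fit1.params[1] - fit2.params[1]
se = np.sqrt(fit1.bse[1] ** 2 + fit2.bse[1] ** 2)     # independent samples
z = diff / se
ci = (diff - 1.96 * se, diff + 1.96 * se)             # asymptotic 95% CI
p = 2 * stats.norm.sf(abs(z))
print(diff, ci, p)
```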

9.
ABSTRACT

The Mellin integral transform is widely used to find the distributions of products and quotients of independent random variables defined over the positive domain, but it is rarely used to derive distributions when the random variables take both positive and negative values. In this paper, the Mellin integral transform is applied to obtain the doubly noncentral t density and its distribution function in convergent series forms.

10.
The authors consider the problem of estimating the density g of independent and identically distributed variables Xi from a sample Z1,…, Zn such that Zi = Xi + σεi for i = 1,…, n, where ε is noise independent of X, with σε having a known distribution. They present a model selection procedure allowing one to construct an adaptive estimator of g and to find nonasymptotic risk bounds. The estimator achieves the minimax rate of convergence in most cases where lower bounds are available. A simulation study gives an illustration of the good practical performance of the method.
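
The authors' estimator is based on model selection; purely as an illustration of the density deconvolution setting Z = X + σε with known noise, here is a classical deconvolution-kernel sketch that assumes Laplace noise (for which the deconvoluting kernel has a simple closed form) and an ad hoc bandwidth.

```python
import numpy as np

def deconv_kde(z, grid, h, b):
    """Deconvolution kernel density estimate of f_X from Z = X + eps,
    eps ~ Laplace(0, b), Gaussian base kernel (Fan-type estimator).
    Deconvoluting kernel: L(u) = phi(u) * (1 - (b/h)^2 * (u^2 - 1))."""
    u = (grid[:, None] - z[None, :]) / h
    phi = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
    L = phi * (1.0 - (b / h) ** 2 * (u ** 2 - 1.0))
    return L.mean(axis=1) / h    # may be slightly negative in the tails

rng = np.random.default_rng(6)
n, b = 2000, 0.3
x = rng.normal(1.0, 0.7, n)                  # unobserved X
z = x + rng.laplace(0.0, b, n)               # observed, contaminated data
grid = np.linspace(-2, 4, 200)
f_hat = deconv_kde(z, grid, h=0.35, b=b)
print(grid[np.argmax(f_hat)])                # should sit near the true mode, 1.0
```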

11.
Abstract

We consider a degradation model which is the sum of two independent processes: a homogeneous gamma process and a Brownian motion. This model is called the perturbed gamma process. Based on independent copies of the perturbed gamma process observed at irregular instants, we propose to estimate the unknown parameters of the model using the moment method. Some general conditions allow us to derive the asymptotic behavior of the estimators. We also show that these general conditions are fulfilled for some specific observation schemes. Finally, we illustrate our method with a numerical study and an application to a real data set.
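
As a simplified illustration of the moment method, the sketch below assumes equally spaced observation instants (the paper allows irregular ones) and the parameterization in which increments are Gamma(a·Δt, rate b) plus N(0, σ²Δt); under these assumptions the first three moments of the increments identify a, b, and σ².

```python
import numpy as np

rng = np.random.default_rng(11)
a, b, sigma, dt, n = 2.0, 1.5, 0.4, 0.1, 50_000   # assumed true parameters, regular grid

# increments of the perturbed gamma process: Gamma(a*dt, rate b) + N(0, sigma^2*dt)
inc = rng.gamma(a * dt, 1.0 / b, n) + rng.normal(0.0, sigma * np.sqrt(dt), n)

m1 = inc.mean() / dt                               # a / b
m2 = inc.var(ddof=1) / dt                          # a / b^2 + sigma^2
m3 = np.mean((inc - inc.mean()) ** 3) / dt         # 2 a / b^3 (Brownian part contributes 0)

b_hat = np.sqrt(2.0 * m1 / m3)
a_hat = m1 * b_hat
sigma2_hat = m2 - m1 / b_hat
print(a_hat, b_hat, sigma2_hat)
```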

12.
Summary: The Hodrick-Prescott (HP) filter has become a widely used tool for detrending integrated time series. Even if the methodological literature sums up an extensive catalogue of severe criticism against an econometric analysis of HP filtered data, the original Hodrick and Prescott (1980, 1997) suggestion to measure the strength of association between economic variables by a regression analysis of corresponding HP filtered time series appears to be very popular. This might be justified if HP induced distortions were quantitatively negligible in empirical applications. However, the simulated regression analyses presented in our paper demonstrate that any attempts of inference based on HP prefiltered series are challenged by a serious risk of spurious regression results. We would like to thank the participants of the Fourth Workshop in Macroeconometrics at the Halle Institute for Economic Research for their comments on a preliminary version of this paper. We are also indebted to the participants of the Thirtieth Macromodels International Conference, in particular David Hendry, Søren Johansen, Katarina Juselius and Helmut Lütkepohl, for stimulating discussions and fruitful suggestions which helped to improve our paper. Finally, Larry Arnoldy helped to improve the final version of the paper.
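
The risk of spurious regression with HP prefiltered series can be illustrated with a small simulation, sketched below under assumed settings (random-walk data, λ = 1600, nominal 5% tests): conventional OLS t-tests on the filtered cycles of two independent series typically reject much more often than 5%, in line with the risk described above.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(7)
n_rep, n, rejections = 500, 200, 0
for _ in range(n_rep):
    # two independent random walks, each detrended with the HP filter
    y_cycle, _ = hpfilter(np.cumsum(rng.normal(size=n)), lamb=1600)
    x_cycle, _ = hpfilter(np.cumsum(rng.normal(size=n)), lamb=1600)
    res = sm.OLS(y_cycle, sm.add_constant(x_cycle)).fit()
    rejections += res.pvalues[1] < 0.05
print("rejection rate under independence:", rejections / n_rep)  # typically well above 5%
```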

13.
This paper shows that when series are fractionally integrated, but unit root tests wrongly indicate that they are I(1), Johansen likelihood ratio (LR) tests tend to find too much spurious cointegration, while the Engle-Granger test presents a more robust performance. This result holds asymptotically as well as in finite samples. The different performance of these two methods is due to the fact that they are based on different principles: the Johansen procedure is based on maximizing correlations (canonical correlation), while Engle-Granger minimizes variances (in the spirit of principal components).
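
A compact simulation in the spirit of this comparison, under assumed settings (independent I(d) series with d = 0.7 built from a truncated MA representation, default test options): the Johansen trace test and the Engle-Granger test are applied to independent fractionally integrated series and their rejection rates compared.

```python
import numpy as np
from statsmodels.tsa.stattools import coint
from statsmodels.tsa.vector_ar.vecm import coint_johansen

def frac_int(d, n, rng):
    """Simulate an I(d) series via its truncated MA(inf) representation:
    psi_0 = 1, psi_j = psi_{j-1} * (j - 1 + d) / j."""
    j = np.arange(1, n)
    psi = np.concatenate(([1.0], np.cumprod((j - 1 + d) / j)))
    return np.convolve(rng.normal(size=n), psi)[:n]

rng = np.random.default_rng(8)
n_rep, n, d = 200, 250, 0.7          # d in (0.5, 1): nonstationary but not a unit root
eg_rej = joh_rej = 0
for _ in range(n_rep):
    y, x = frac_int(d, n, rng), frac_int(d, n, rng)   # independent: no true cointegration
    eg_rej += coint(y, x)[1] < 0.05                    # Engle-Granger p-value
    joh = coint_johansen(np.column_stack([y, x]), det_order=0, k_ar_diff=1)
    joh_rej += joh.lr1[0] > joh.cvt[0, 1]              # trace test, 5% critical value
print("Engle-Granger rejection rate: ", eg_rej / n_rep)
print("Johansen trace rejection rate:", joh_rej / n_rep)
```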

14.
ABSTRACT

We introduce a class of large Bayesian vector autoregressions (BVARs) that allows for non-Gaussian, heteroscedastic, and serially dependent innovations. To make estimation computationally tractable, we exploit a certain Kronecker structure of the likelihood implied by this class of models. We propose a unified approach for estimating these models using Markov chain Monte Carlo (MCMC) methods. In an application that involves 20 macroeconomic variables, we find that these BVARs with more flexible covariance structures outperform the standard variant with independent, homoscedastic Gaussian innovations in both in-sample model-fit and out-of-sample forecast performance.

15.
ABSTRACT

The display of data by means of contingency tables is used in different approaches to statistical inference, for example, to address the test of homogeneity of independent multinomial distributions. We develop a Bayesian procedure to test simple null hypotheses versus bilateral alternatives in contingency tables. Given independent samples from two binomial distributions and taking a mixed prior distribution, we calculate the posterior probability that the proportion of successes in the first population is the same as in the second. This posterior probability is compared with the p-value of the classical method, obtaining a reconciliation between the classical and Bayesian results. The results obtained are generalized to r × s tables.
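
A minimal sketch of the 2 × 2 case under an assumed mixed prior: a point mass π0 on H0 (a common success probability with a Beta(a, b) prior) and independent Beta(a, b) priors under the alternative. The marginal likelihoods are available in closed form, so the posterior probability of H0 is a short computation; the specific prior choices here are not necessarily those of the article.

```python
import numpy as np
from scipy.special import betaln, gammaln

def log_binom(n, x):
    """log of the binomial coefficient C(n, x)."""
    return gammaln(n + 1) - gammaln(x + 1) - gammaln(n - x + 1)

def posterior_prob_h0(x1, n1, x2, n2, a=1.0, b=1.0, pi0=0.5):
    """Posterior probability of H0: p1 = p2 under a mixed prior that puts mass
    pi0 on H0 (common p ~ Beta(a, b)) and 1 - pi0 on independent Beta(a, b) priors."""
    # log marginal likelihood under H0 (common p)
    m0 = (log_binom(n1, x1) + log_binom(n2, x2)
          + betaln(a + x1 + x2, b + n1 + n2 - x1 - x2) - betaln(a, b))
    # log marginal likelihood under H1 (independent p1, p2)
    m1 = (log_binom(n1, x1) + betaln(a + x1, b + n1 - x1) - betaln(a, b)
          + log_binom(n2, x2) + betaln(a + x2, b + n2 - x2) - betaln(a, b))
    log_odds = np.log(pi0) - np.log(1 - pi0) + m0 - m1
    return 1.0 / (1.0 + np.exp(-log_odds))

print(posterior_prob_h0(18, 30, 12, 30))   # e.g., 18/30 vs 12/30 successes
```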

16.
ABSTRACT

The aim of this study is to investigate the impact of correlation structure, prevalence, and effect size on the risk prediction model by using the change in the area under the receiver operating characteristic curve (ΔAUC), net reclassification improvement (NRI), and integrated discrimination improvement (IDI). In the simulation study, datasets are generated under different correlation structures, prevalences, and effect sizes. We verify the simulation results with a real-data application. In conclusion, the correlation structure between the variables should be taken into account while composing a multivariable model. A negative correlation structure between the independent variables is more beneficial when constructing a model.
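
A brief sketch of the three measures on simulated data. For brevity the "fitted" risks are taken as the true risk functions rather than estimated models, and the formulas shown are the standard category-free NRI and IDI, which may differ from the exact variants used in the study.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def nri_idi(y, p_old, p_new):
    """Continuous (category-free) NRI and IDI for a binary outcome y,
    comparing predicted risks from an old and a new model."""
    ev, ne = (y == 1), (y == 0)
    up, down = p_new > p_old, p_new < p_old
    nri = (up[ev].mean() - down[ev].mean()) + (down[ne].mean() - up[ne].mean())
    idi = (p_new[ev].mean() - p_old[ev].mean()) - (p_new[ne].mean() - p_old[ne].mean())
    return nri, idi

rng = np.random.default_rng(9)
n = 2000
x1, x2 = rng.normal(size=n), rng.normal(size=n)
logit = -1.0 + 1.0 * x1 + 0.8 * x2
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
p_old = 1 / (1 + np.exp(-(-1.0 + 1.0 * x1)))     # risk without the new marker x2
p_new = 1 / (1 + np.exp(-logit))                  # risk adding x2
delta_auc = roc_auc_score(y, p_new) - roc_auc_score(y, p_old)
print(delta_auc, *nri_idi(y, p_old, p_new))
```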

17.
ABSTRACT

A variable selection procedure based on least absolute deviation (LAD) estimation and adaptive lasso (LAD-Lasso for short) is proposed for median regression models with doubly censored data. The proposed procedure can select significant variables and estimate the parameters simultaneously, and the resulting estimators enjoy the oracle property. Simulation results show that the proposed method works well.

18.
Abstract

Many methods used in spatial statistics are computationally demanding, and so the development of more computationally efficient methods has received attention. An important development is the integrated nested Laplace approximation (INLA) method, which carries out Bayesian analysis more efficiently. For geostatistical data, this method relies on the SPDE approach, which requires the creation of a mesh overlying the study area, and all of the obtained results depend on it. The impact of the mesh on inference and prediction is investigated through simulations. As there is no formal procedure to specify the mesh, we investigate a guideline for creating an optimal one.

19.
ABSTRACT

Regression analysis is one of the important tools in statistics to investigate the relationships among variables. When the sample size is small, however, the assumptions for regression analysis can be violated. This research focuses on using the exact bootstrap to construct confidence intervals for regression parameters in small samples. The comparison of the exact bootstrap method with the basic bootstrap method was carried out by a simulation study. It was found that for very small samples (n ≈ 5) under a Laplace distribution, with the independent variable treated as random, the exact bootstrap confidence interval was more effective than the standard bootstrap confidence interval.
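
For samples this small the bootstrap distribution can be enumerated exactly rather than approximated by resampling. The sketch below (with assumed data, a Laplace error distribution, and the OLS slope as the statistic) weights every multiset of indices by its multinomial probability and reads a percentile interval off the exact CDF; degenerate resamples with constant x are skipped, which is a simplification for illustration.

```python
import numpy as np
from itertools import combinations_with_replacement
from collections import Counter
from math import factorial, prod

def exact_bootstrap_ci(x, y, stat, alpha=0.05):
    """Exact (full-enumeration) bootstrap percentile CI: every multiset of
    indices is weighted by its multinomial probability under resampling."""
    n = len(x)
    vals, wts = [], []
    for idx in combinations_with_replacement(range(n), n):
        xs, ys = x[list(idx)], y[list(idx)]
        if np.ptp(xs) == 0:                     # slope undefined for constant x
            continue
        counts = Counter(idx).values()
        wts.append(factorial(n) / prod(factorial(c) for c in counts) / n ** n)
        vals.append(stat(xs, ys))
    order = np.argsort(vals)
    vals = np.asarray(vals)[order]
    wts = np.asarray(wts)[order]
    cdf = np.cumsum(wts / wts.sum())            # renormalize over kept resamples
    return vals[np.searchsorted(cdf, alpha / 2)], vals[np.searchsorted(cdf, 1 - alpha / 2)]

def slope(xs, ys):                              # OLS slope as the target parameter
    return np.polyfit(xs, ys, 1)[0]

rng = np.random.default_rng(10)
x = rng.laplace(size=5)
y = 2.0 + 1.5 * x + rng.laplace(scale=0.5, size=5)
print(exact_bootstrap_ci(x, y, slope))
```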

20.
ABSTRACT

In this article, we consider the estimation of R = P(Y < X), when Y and X are two independent three-parameter Lindley (LI) random variables. On the basis of two independent samples, the modified maximum likelihood estimator, along with its asymptotic behavior, and a conditional likelihood-based estimator are used to estimate R. We also propose a sample-based estimate of R and the associated credible interval based on an importance sampling procedure. A real life data set involving the times to breakdown of an insulating fluid is presented and analyzed for illustrative purposes.
