20 similar documents found; search took 15 ms.
1.
In this article we study the problem of classification of three-level multivariate data, where multiple q-variate observations are measured at u sites and over p time points, under the assumption of multivariate normality. The new classification rules with certain structured and unstructured mean vectors and covariance structures are very efficient in small-sample scenarios, when the number of observations is not adequate to estimate the unknown variance–covariance matrix. These classification rules successfully model the correlation structure of successive repeated measurements over time. Computational algorithms for maximum likelihood estimates of the unknown population parameters are presented. Simulation results show that the introduction of sites in the classification rules improves their performance over the existing classification rules without sites.
2.
Reduced-rank regression models proposed by Anderson [1951. Estimating linear restrictions on regression coefficients for multivariate normal distributions. Ann. Math. Statist. 22, 327–351] have been used in various applications in the social and natural sciences. In this paper we combine the features of these models with another popular model, the seemingly unrelated regression model proposed by Zellner [1962. An efficient method of estimating seemingly unrelated regressions and tests for aggregation bias. J. Amer. Statist. Assoc. 57, 348–368]. In addition to estimation and inference aspects of the new model, we also discuss an application in the area of marketing.
3.
In this paper, we extend a first-order nonlinear autoregressive (AR) model with skew-normal innovations. A semiparametric method is proposed to estimate the nonlinear part of the model, using the conditional least squares method for parametric estimation and the nonparametric kernel approach for estimating the AR adjustment. Computational techniques for parameter estimation are then carried out by the maximum likelihood (ML) approach using Expectation-Maximization (EM) type optimization, and explicit iterative forms for the ML estimators are obtained. Furthermore, the accuracy of the proposed methods is verified in a simulation study and a real application.
4.
In this paper we discuss some problems of existing methods for calculating the Value-at-Risk (VaR) in an ARCH setting. It should be noted that the commonly used approaches often confuse the true innovations with the empirical residuals, i.e., estimation errors for the unknown ARCH parameters are ignored. We adjust for this by using the asymptotics of the residual empirical process, and propose a feasible VaR which, in the spirit of VaR, keeps the assets away from a specified risk with a high confidence level. Its meaningfulness in comparison with the usual VaR is illustrated by numerical studies.
5.
In this paper, hypothesis testing and interval estimation for intraclass correlation coefficients are considered in a two-way random effects model with interaction. Two particular intraclass correlation coefficients are described in a reliability study. Tests and confidence intervals for the intraclass correlation coefficients are developed for the case of unbalanced data. One approach is based on the generalized p-value and generalized confidence interval; the other is based on the modified large-sample idea. These two approaches simplify to the ones in Gilder et al. [2007. Confidence intervals on intraclass correlation coefficients in a balanced two-factor random design. J. Statist. Plann. Inference 137, 1199–1212] when the data are balanced. Furthermore, some statistical properties of the generalized confidence intervals are investigated. Finally, simulation results comparing the performance of the modified large-sample approach with that of the generalized approach are reported. The simulation results indicate that the modified large-sample approach performs better than the generalized approach in terms of coverage probability and expected length of the confidence interval.
6.
7.
In this article, robust estimation and prediction in multivariate autoregressive models with exogenous variables (VARX) are considered. The conditional least squares (CLS) estimators are known to be non-robust when outliers occur. To obtain robust estimators, the method introduced in Duchesne [2005. Robust and powerful serial correlation tests with new robust estimates in ARX models. J. Time Ser. Anal. 26, 49–81] and Bou Hamad and Duchesne [2005. On robust diagnostics at individual lags using RA-ARX estimators. In: Duchesne, P., Rémillard, B. (Eds.), Statistical Modeling and Analysis for Complex Data Problems. Springer, New York] is generalized to VARX models. The asymptotic distribution of the new estimators is studied, from which the asymptotic covariance matrix of the robust estimators is obtained. Classical conditional prediction intervals normally rely on estimators such as the usual non-robust CLS estimators. In the presence of outliers, such as additive outliers, these classical predictions can be severely biased, and more generally the occurrence of outliers may invalidate the usual conditional prediction intervals. Consequently, the new robust methodology is used to develop robust conditional prediction intervals which take parameter estimation uncertainty into account. In a simulation study, we investigate the finite sample properties of the robust prediction intervals under several scenarios for the occurrence of outliers, and the new intervals are compared to non-robust intervals based on classical CLS estimators.
8.
In this paper, the limit distribution of the least squares estimator for mildly explosive autoregressive models with strong mixing innovations is established, which is shown to be Cauchy as in the iid case. The result is applied to identify the onset and the end of an explosive period of an econometric time series. Simulations and data analysis are also conducted to demonstrate the usefulness of the result.
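A minimal simulation of this setting (parameter values and variable names are our own illustrative choices, not the paper's): a mildly explosive AR(1) with root ρ_n = 1 + c/n^α, α ∈ (0, 1), estimated by least squares.

```python
import numpy as np

rng = np.random.default_rng(1)
n, c, alpha = 2000, 1.0, 0.7
rho_n = 1.0 + c / n ** alpha          # mildly explosive root, slightly above 1

# Simulate y_t = rho_n * y_{t-1} + e_t with iid standard normal innovations
y = np.zeros(n + 1)
for t in range(1, n + 1):
    y[t] = rho_n * y[t - 1] + rng.standard_normal()

# Least squares estimator of the autoregressive coefficient
rho_hat = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])
```

The estimation error ρ̂ − ρ_n shrinks very fast along the explosive path, and a suitably normalized version of it has a Cauchy limit; the abstract's point is that this continues to hold when the iid innovations are replaced by strong mixing ones.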
9.
For a GARCH(1,1) sequence or an AR(1) model with ARCH(1) errors, one can estimate the tail index by solving an estimating equation with unknown parameters replaced by their quasi-maximum-likelihood estimates, and a profile empirical likelihood method can be employed to construct an effective confidence interval for the tail index. However, this requires that the errors of such a model have at least a finite fourth moment. In this article, we show that the finite fourth moment requirement can be relaxed by employing a least absolute deviations estimate for the unknown parameters, noting that the estimating equation determining the tail index is invariant to a scale transformation of the underlying model.
10.
This paper gives a comparative study of the K-means algorithm and the mixture model (MM) method for clustering normal data. The EM algorithm is used to compute the maximum likelihood estimators (MLEs) of the parameters of the MM. These parameters include the mixing proportions, which may be thought of as the prior probabilities of the different clusters; the maximum posterior (Bayes) rule is used for clustering. Hence, asymptotically the MM method approaches the Bayes rule for known parameters, which is optimal in terms of minimizing the expected misclassification rate (EMCR).
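As a toy version of this comparison (our own sketch in numpy, not the paper's code), the snippet below fits a two-component univariate normal mixture by EM, clusters by the maximum posterior rule, and compares the labels with Lloyd's K-means on the same data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data: two well-separated normal clusters
x = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(2.0, 1.0, 200)])

def kmeans_1d(x, iters=50):
    """Lloyd's K-means with two clusters on the real line."""
    c = np.array([x.min(), x.max()])
    for _ in range(iters):
        lab = np.abs(x[:, None] - c[None, :]).argmin(axis=1)
        c = np.array([x[lab == j].mean() for j in (0, 1)])
    return lab, c

def em_mixture_1d(x, iters=100):
    """EM for a two-component normal mixture; returns MLEs and posteriors."""
    pi = 0.5
    mu = np.array([x.min(), x.max()])
    sd = np.array([x.std(), x.std()])
    for _ in range(iters):
        # E-step: posterior probability (responsibility) of component 1
        d0 = np.exp(-0.5 * ((x - mu[0]) / sd[0]) ** 2) / sd[0]
        d1 = np.exp(-0.5 * ((x - mu[1]) / sd[1]) ** 2) / sd[1]
        r1 = pi * d1 / ((1 - pi) * d0 + pi * d1)
        # M-step: update mixing proportion, means and standard deviations
        pi = r1.mean()
        mu = np.array([((1 - r1) * x).sum() / (1 - r1).sum(),
                       (r1 * x).sum() / r1.sum()])
        sd = np.sqrt(np.array([((1 - r1) * (x - mu[0]) ** 2).sum() / (1 - r1).sum(),
                               (r1 * (x - mu[1]) ** 2).sum() / r1.sum()]))
    return pi, mu, sd, r1

lab_km, centers = kmeans_1d(x)
pi, mu, sd, r1 = em_mixture_1d(x)
lab_mm = (r1 > 0.5).astype(int)   # maximum posterior (Bayes) rule
```

With well-separated clusters the two label sets agree almost everywhere; the MM method additionally yields mixing proportions and posterior probabilities, which K-means does not provide.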
11.
This paper combines two ideas to construct autoregressive processes of arbitrary order. The first is the construction of first order stationary processes described in Pitt et al. [2002. Constructing first order autoregressive models via latent processes. Scand. J. Statist. 29, 657–663] and the second is the construction of higher order processes described in Raftery [1985. A model for high order Markov chains. J. Roy. Statist. Soc. B 47, 528–539]. The resulting models provide appealing alternatives for modelling non-linear and non-Gaussian time series.
12.
We consider estimation of a missing value for a stationary autoregressive process of order one with exponential innovations and compare two methods of estimation of the missing value, with respect to Pitman's measure of closeness (PMC).
13.
This paper discusses asymptotic expansions for the null distributions of some test statistics for profile analysis under non-normality. It is known that the null distributions of these statistics converge to a chi-square distribution under normality [Siotani, M., 1956. On the distributions of the Hotelling's T²-statistics. Ann. Inst. Statist. Math. Tokyo 8, 1–14; Siotani, M., 1971. An asymptotic expansion of the non-null distributions of Hotelling's generalized T²-statistic. Ann. Math. Statist. 42, 560–571]. We extend this result by obtaining asymptotic expansions under general distributions. Moreover, the effect of non-normality is also considered. To obtain these results, we make use of matrix manipulations such as direct products and symmetric tensors, rather than the usual elementwise tensor notation.
14.
It is common in parametric bootstrap to select the model from the data and then treat it as if it were the true model. Chatfield (1993, 1996) has shown that ignoring model uncertainty may seriously undermine the coverage accuracy of prediction intervals. In this paper, we propose a method based on the moving block bootstrap for introducing the model selection step into the resampling algorithm. We present a Monte Carlo study comparing the finite sample properties of the proposed method with those of alternative methods in the case of prediction intervals.
15.
Efficient inference for regression models requires that heteroscedasticity be taken into account. We consider statistical inference under heteroscedasticity in a semiparametric measurement error regression model, in which some covariates are measured with error. This paper has multiple components. First, we propose a new method for testing heteroscedasticity. Its advantages over existing methods are that it needs no nonparametric estimation and does not involve any mismeasured variables. Second, we propose a new two-step estimator for the error variances when there is heteroscedasticity. Finally, we propose a weighted estimating equation-based estimator (WEEBE) for the regression coefficients and establish its asymptotic properties. Compared with existing estimators, the proposed WEEBE is asymptotically more efficient, avoids undersmoothing the regressor functions, and requires fewer restrictions on the observed regressors. Simulation studies show that the proposed test procedure and estimators have good finite sample performance. A real data set is used to illustrate the utility of the proposed methods.
16.
The Lagrange Multiplier (LM) test is one of the principal tools for detecting ARCH and GARCH effects in financial data analysis. However, when the underlying data are non-normal, which is often the case in practice, the asymptotic LM test, based on the χ²-approximation of critical values, is known to perform poorly, particularly for small and moderate sample sizes. In this paper we propose to employ two resampling techniques to find critical values of the LM test, namely permutation and bootstrap. We derive the properties of exactness and asymptotic correctness for the permutation and bootstrap LM tests, respectively. Our numerical studies indicate that the proposed resampling algorithms significantly improve the size and power of the LM test for both skewed and heavy-tailed processes. We also illustrate our new approaches with an application to the analysis of the Euro/USD currency exchange rates and the German stock index. The Canadian Journal of Statistics 40: 405–426; 2012 © 2012 Statistical Society of Canada
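The permutation idea can be sketched as follows: under the null of no ARCH effects the observations are exchangeable, so critical values for Engle's T·R² LM statistic can be obtained by recomputing it on permuted series. This is only an illustrative reconstruction under our own assumptions (numpy, hypothetical ARCH(1) example data), not the authors' implementation.

```python
import numpy as np

def lm_arch(e, q=1):
    """Engle's LM statistic: T * R^2 from regressing e_t^2 on its q lags."""
    s = np.asarray(e, float) ** 2
    y = s[q:]
    X = np.column_stack([np.ones(len(y))]
                        + [s[q - j: len(s) - j] for j in range(1, q + 1)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return len(y) * (1.0 - resid @ resid / np.sum((y - y.mean()) ** 2))

def permutation_lm_test(e, q=1, n_perm=199, seed=0):
    """Permutation p-value: shuffling e is valid under the iid null."""
    rng = np.random.default_rng(seed)
    stat = lm_arch(e, q)
    perm = [lm_arch(rng.permutation(e), q) for _ in range(n_perm)]
    return stat, (1 + sum(s >= stat for s in perm)) / (n_perm + 1)

# Hypothetical example data: iid noise (null) versus an ARCH(1) process
rng = np.random.default_rng(7)
e_iid = rng.standard_normal(500)
e_arch = np.zeros(500)
for t in range(1, 500):
    e_arch[t] = np.sqrt(0.1 + 0.5 * e_arch[t - 1] ** 2) * rng.standard_normal()

stat_iid, p_iid = permutation_lm_test(e_iid)
stat_arch, p_arch = permutation_lm_test(e_arch)
```

On the iid series the permutation p-value behaves like a uniform draw, while on the ARCH(1) series the observed statistic dwarfs the permuted ones and the p-value collapses to its minimum attainable value.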
17.
We propose optimal procedures to partition k multivariate normal populations into two disjoint subsets with respect to a given standard vector. Good or bad multivariate normal populations are defined according to whether their Mahalanobis distances to a known standard vector are small or large. Partitioning k multivariate normal populations reduces to partitioning k non-central chi-square or non-central F distributions with respect to the corresponding non-centrality parameters, depending on whether the covariance matrices are known or unknown. The minimum required sample size for each population is determined to ensure that the probability of a correct decision attains a specified level. An example is given to illustrate the procedures.
18.
There is a close analogy between empirical distributions of i.i.d. random variables and normalized spectral distributions of wide-sense stationary processes. Herein we make use of this analogy to develop nonparametric comparisons of two spectral distributions and nonparametric tests of stationarity versus change-point alternatives via spectral analysis of a time series.
19.
This paper proposes a method for obtaining the exact probability of occurrence of the first success run of specified length, under the additional constraint that at every trial until the occurrence of the first success run the number of successes up to that trial exceeds the number of failures. Because of the additional constraint, the problem cannot be solved by the usual method of conditional probability generating functions. An idea based on a kind of truncation is introduced and studied in order to solve the problem. Concrete methods for obtaining the probability in the cases of Bernoulli trials and time-homogeneous {0,1}-valued Markov dependent trials are given. As an application of the results, a modification of the start-up demonstration test is studied. Numerical examples illustrating the feasibility of the results are also given.
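The event described above is easy to check by brute force for small numbers of trials. The sketch below (our own construction, not the paper's truncation method) enumerates Bernoulli sequences and computes the exact probability that the first success run of length k occurs within n trials while, at every trial up to its occurrence, the number of successes strictly exceeds the number of failures.

```python
from itertools import product
from fractions import Fraction

def prob_first_run_with_constraint(n, k, p):
    """Exact probability, over n Bernoulli(p) trials, that the first success
    run of length k occurs and that successes strictly outnumber failures
    at every trial up to and including its occurrence."""
    p = Fraction(p)
    total = Fraction(0)
    for seq in product((0, 1), repeat=n):
        # locate the end of the first success run of length k
        run, t_end = 0, None
        for t, trial in enumerate(seq):
            run = run + 1 if trial == 1 else 0
            if run == k:
                t_end = t
                break
        if t_end is None:
            continue
        # constraint: successes strictly exceed failures at each trial so far
        surplus, ok = 0, True
        for trial in seq[: t_end + 1]:
            surplus += 1 if trial == 1 else -1
            if surplus <= 0:
                ok = False
                break
        if not ok:
            continue
        # weight by the probability of the full length-n sequence
        prob_seq = Fraction(1)
        for trial in seq:
            prob_seq *= p if trial == 1 else 1 - p
        total += prob_seq
    return total
```

For k = 1 the constraint forces the very first trial to be a success, so the probability reduces to p, which gives a quick sanity check of the enumeration.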
20.
A new family of kernels is suggested for use in long run variance (LRV) estimation and robust regression testing. The kernels are constructed by taking powers of the Bartlett kernel and are intended to be used with no truncation (or bandwidth) parameter. As the power parameter (ρ) increases, the kernels become very sharp at the origin and increasingly downweight values away from the origin, thereby achieving effects similar to a bandwidth parameter. Sharp origin kernels can be used in regression testing in much the same way as conventional kernels with no truncation, as suggested in the work of Kiefer and Vogelsang [2002a. Heteroskedasticity-autocorrelation robust testing using bandwidth equal to sample size. Econometric Theory 18, 1350–1366; 2002b. Heteroskedasticity-autocorrelation robust standard errors using the Bartlett kernel without truncation. Econometrica 70, 2093–2095]. Analysis and simulations indicate that sharp origin kernels lead to tests with improved size properties relative to conventional tests and better power properties than other tests using Bartlett and other conventional kernels without truncation.
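As a sketch of the construction just described, the code below implements the sharp origin kernel, a power of the Bartlett kernel, k_ρ(x) = (1 − |x|)^ρ for |x| ≤ 1, and uses it in an untruncated LRV estimate (bandwidth equal to the sample size). Function names and the white-noise usage example are our own illustration, not the authors' code.

```python
import numpy as np

def sharp_origin_kernel(x, rho):
    """k_rho(x) = (1 - |x|)^rho on [-1, 1], zero outside."""
    ax = np.abs(x)
    return np.where(ax <= 1.0, (1.0 - ax) ** rho, 0.0)

def lrv_estimate(u, rho):
    """Long run variance estimate with the sharp origin kernel and no
    truncation: gamma_hat(0) + 2 * sum_j k_rho(j/T) * gamma_hat(j),
    where gamma_hat is the sample autocovariance of the demeaned series."""
    u = np.asarray(u, float) - np.mean(u)
    T = len(u)
    gamma = np.array([u[: T - j] @ u[j:] / T for j in range(T)])
    w = sharp_origin_kernel(np.arange(T) / T, rho)
    return gamma[0] + 2.0 * np.sum(w[1:] * gamma[1:])

# Hypothetical usage: for white noise the estimate should be near var(u)
rng = np.random.default_rng(3)
u = rng.standard_normal(400)
omega2 = lrv_estimate(u, rho=16)
```

Raising ρ concentrates the weight near lag zero, like a shrinking bandwidth; for integer ρ the kernel is a pointwise power of the positive semi-definite Bartlett kernel, which is why the estimate stays non-negative without any truncation.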