Similar documents (20 results)
1.
In this article, we consider the problem of testing (a) sphericity and (b) intraclass covariance structure under a growth curve model. The maximum likelihood estimator (MLE) for the mean in a growth curve model is a weighted estimator whose weight matrix is the inverse of the sample covariance matrix, which is unstable when p is close to N and singular when p exceeds N. The MLE for the covariance matrix is based on the MLE for the mean and can therefore be very poor for p close to N. For both structures (a) and (b), we modify the MLE for the mean to an unweighted estimator and, based on it, propose a new estimator for the covariance matrix. This new estimator leads to new tests for (a) and (b). We also propose two further tests for each structure that are based only on the sample covariance matrix.

To compare the performance of all four tests, we compute the attained significance level and the empirical power for each of the structures (a) and (b). We show that one of the tests based on the sample covariance matrix outperforms the likelihood ratio test based on the MLE.
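For a concrete baseline, the sketch below implements Mauchly's classical likelihood-ratio test of sphericity, which, like the paper's simpler proposals, depends only on the sample covariance matrix; it is not the new test developed in the paper, and the growth curve structure is omitted.

```python
import numpy as np
from scipy.stats import chi2

def mauchly_sphericity(X):
    """Mauchly's likelihood-ratio test of H0: Sigma = sigma^2 * I.

    X : (n, p) data matrix with iid rows.
    Returns (W, statistic, p_value).
    """
    n, p = X.shape
    S = np.cov(X, rowvar=False)                      # sample covariance (p x p)
    W = np.linalg.det(S) / (np.trace(S) / p) ** p
    # Box's chi-square approximation to the null distribution.
    factor = n - 1 - (2 * p**2 + p + 2) / (6 * p)
    stat = -factor * np.log(W)
    df = p * (p + 1) // 2 - 1
    return W, stat, chi2.sf(stat, df)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))                         # spherical null data
print(mauchly_sphericity(X))
```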


2.
In this article, we give a brief overview of the different functional forms of the generalized Poisson distribution (GPD) and the various methods of parameter estimation found in the literature. We compare the method of moments estimation (ME) and maximum likelihood estimation (MLE) of the parameters of the GPD through a simulation study in terms of bias, MSE and covariance. To simulate random numbers from the GPD, we develop a Matlab function gpoissrnd(). The simulation study leads to the important conclusion that ME performs as well as or better than MLE when the sample size is small.

Further, we fit the GPD to various datasets from the literature using both estimation methods and observe that the results do not differ significantly even when the sample size is large. Overall, we conclude that for the GPD, using ME in place of MLE leads to very similar results. The computational simplicity of ME compared with MLE further supports its use for the GPD in practice.
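The paper's gpoissrnd() is a Matlab routine; a minimal Python analogue is sketched below, using the standard branching-process representation of the GPD (total progeny of a Poisson(lambda) Galton-Watson process started from Poisson(theta) ancestors) together with the moment estimators implied by E X = theta/(1-lambda) and Var X = theta/(1-lambda)^3. Parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def gpoissrnd(theta, lam, size):
    """Draw from the generalized Poisson distribution GPD(theta, lam), 0 <= lam < 1.

    Uses the branching-process representation: the total progeny of a
    Galton-Watson process with Poisson(lam) offspring, started from
    Poisson(theta) ancestors, follows GPD(theta, lam).
    """
    out = np.empty(size, dtype=int)
    for i in range(size):
        total = gen = rng.poisson(theta)
        while gen > 0:
            gen = rng.poisson(lam * gen)         # offspring of the current generation
            total += gen
        out[i] = total
    return out

def gpd_moment_estimates(x):
    """Method-of-moments estimators from E X = theta/(1-lam), Var X = theta/(1-lam)^3."""
    m, v = x.mean(), x.var(ddof=1)
    lam_hat = 1.0 - np.sqrt(m / v)
    theta_hat = m * np.sqrt(m / v)
    return theta_hat, lam_hat

x = gpoissrnd(theta=2.0, lam=0.3, size=5000)
print(gpd_moment_estimates(x))                   # should be near (2.0, 0.3)
```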


3.
In the manufacturing industry, the lifetime performance index C_L is used to evaluate larger-the-better quality characteristics of products; it quickly shows whether the lifetime performance of products meets the desired level. In this article, we first obtain the maximum likelihood estimator of C_L with two unknown parameters in the Lomax distribution on the basis of a progressive type I interval censored sample. Using this MLE, we derive asymptotic confidence intervals for C_L via the delta method. Furthermore, the MLE of C_L is used to establish a hypothesis testing procedure under a given lower specification limit L. In addition, we develop a hypothesis testing procedure for the case where the scale parameter of the Lomax distribution is known. Finally, we illustrate the proposed inspection procedures with a real example. The testing procedure algorithms presented in this paper are efficient and easy to implement.
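As a generic illustration of the delta-method step, the sketch below computes an asymptotic confidence interval for a smooth function of an MLE from a numerical gradient. The function g in the example is the textbook index C_L = (mu - L)/sigma with hypothetical estimates and covariance, not the paper's Lomax-specific expressions.

```python
import numpy as np
from scipy.stats import norm

def delta_method_ci(g, theta_hat, cov_hat, level=0.95, eps=1e-6):
    """Asymptotic CI for g(theta) via the delta method.

    g         : smooth scalar function of the parameter vector
    theta_hat : MLE of theta
    cov_hat   : estimated covariance of theta_hat (e.g. inverse observed information)
    """
    theta_hat = np.asarray(theta_hat, dtype=float)
    # Numerical gradient of g at the MLE.
    grad = np.array([
        (g(theta_hat + eps * e) - g(theta_hat - eps * e)) / (2 * eps)
        for e in np.eye(len(theta_hat))
    ])
    se = np.sqrt(grad @ cov_hat @ grad)
    z = norm.ppf(0.5 + level / 2)
    est = g(theta_hat)
    return est - z * se, est + z * se

# Toy usage: CI for C_L = (mu - L)/sigma with hypothetical MLE and covariance.
L_spec = 1.0
ci = delta_method_ci(lambda t: (t[0] - L_spec) / t[1],
                     theta_hat=[3.0, 1.5],
                     cov_hat=np.diag([0.04, 0.01]))
print(ci)
```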

4.
This article provides a procedure for the detection and identification of outliers in the spectral domain, in which the Whittle maximum likelihood estimator of the panel data model proposed by Chen [W.D. Chen, Testing for spurious regression in a panel data model with the individual number and time length growing, J. Appl. Stat. 33(88) (2006b), pp. 759–772] is implemented. We extend the approach of Chang and co-workers [I. Chang, G.C. Tiao, and C. Chen, Estimation of time series parameters in the presence of outliers, Technometrics 30 (2) (1988), pp. 193–204] to the spectral domain; through the Whittle approach we can quickly detect outliers and identify their type. A fixed effects panel data model is used in which the remainder disturbance is assumed to be a fractional autoregressive integrated moving-average (ARFIMA) process, and the likelihood ratio criterion is obtained directly through the modified inverse Fourier transform. This saves considerable time, especially when the model is estimated on a huge data-set.

Through Monte Carlo experiments, the consistency of the estimator is examined by growing the individual number N and the time length T, with the long-memory remainder disturbances contaminated by two types of outliers: additive outliers and innovation outliers. The power tests show that the estimators are quite successful and powerful.

In the empirical study, we apply the model to Taiwan's computer motherboard industry, using weekly data on nine well-known companies from 1 January 2000 to 31 October 2006. The proposed model has a smaller mean square error and shows more distinctive aggressive properties than the raw-data model does.
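A minimal sketch of the Whittle approach for the long-memory building block is given below: it estimates the memory parameter d of an ARFIMA(0, d, 0) process by minimizing the concentrated Whittle criterion over Fourier frequencies. The panel structure and the outlier-detection step of the paper are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def whittle_d(x):
    """Whittle estimate of the memory parameter d of an ARFIMA(0, d, 0) process.

    Minimizes sum_j [log f(w_j; d) + I(w_j)/f(w_j; d)] over Fourier
    frequencies, where f(w; d) is proportional to (2 sin(w/2))^(-2d);
    the innovation variance is concentrated out.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    j = np.arange(1, (n - 1) // 2 + 1)
    w = 2 * np.pi * j / n
    I = np.abs(np.fft.fft(x - x.mean())[j]) ** 2 / (2 * np.pi * n)   # periodogram
    g = (2 * np.sin(w / 2)) ** -2            # f(w; d) = sigma2/(2 pi) * g(w)**d

    def profile(d):
        fd = g ** d
        s2 = np.mean(I / fd)                 # concentrated scale parameter
        return np.sum(np.log(fd)) + len(w) * np.log(s2)

    return minimize_scalar(profile, bounds=(-0.49, 0.49), method='bounded').x

# Sanity check on white noise (true d = 0).
rng = np.random.default_rng(2)
print(whittle_d(rng.normal(size=2048)))
```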


5.
6.
The 1978 European Community Typology for Agricultural Holdings is described in this paper and contrasted with a data-based, polythetic multivariate classification derived from cluster analysis.

The requirement to reduce the size of the variable set employed in an optimisation-partition method of clustering suggested the value of principal components and factor analysis for identifying the major ‘source’ dimensions against which to measure farm differences and similarities.

The Euclidean cluster analysis incorporating the reduced dimensions quickly converged to a stable solution and was little influenced by the initial number or nature of ‘seeding’ partitions of the data.

The assignment of non-sampled observations from the population to cluster classes was completed using classification functions.

The final scheme, based on a sample of over 2,000 observations, was found to be interpretable and meaningful in terms of agricultural structure and practice, and much superior in explanatory power to a version of the principal activity typology.
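A modern equivalent of this pipeline (dimension reduction, Euclidean clustering, then classification functions for non-sampled units) can be sketched in a few lines. The component and cluster counts below are illustrative assumptions, and linear discriminant analysis stands in for the paper's classification functions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
farms = rng.normal(size=(2000, 30))              # stand-in for the farm survey variables

# 1. Reduce the variable set to a few 'source' dimensions.
pca = PCA(n_components=5).fit(farms)
scores = pca.transform(farms)

# 2. Euclidean cluster analysis on the reduced dimensions.
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(scores)

# 3. Classification functions to assign non-sampled holdings to the cluster classes.
clf = LinearDiscriminantAnalysis().fit(scores, km.labels_)
new_farm = rng.normal(size=(1, 30))
print(clf.predict(pca.transform(new_farm)))
```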


7.
Efficient, accurate, and fast Markov chain Monte Carlo estimation methods based on the Implicit approach are proposed. In this article, we introduce the notion of the Implicit method for estimating the parameters of stochastic volatility models.

Implicit estimation offers a substantial computational advantage for learning from observations without prior knowledge, and thus provides a good alternative to classical Bayesian inference when priors are unavailable.

Both the Implicit and Bayesian approaches are illustrated using simulated data and applied to daily stock return data on the CAC40 index.
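The abstract does not spell out the Implicit estimator, but the canonical log-normal stochastic volatility model it targets can be simulated as follows; the parameter values are assumptions chosen to resemble daily return data.

```python
import numpy as np

def simulate_sv(n, mu=-1.0, phi=0.95, sigma_eta=0.2, seed=4):
    """Simulate the canonical log-normal stochastic volatility model:
        h_t = mu + phi * (h_{t-1} - mu) + sigma_eta * eta_t
        y_t = exp(h_t / 2) * eps_t,   eta_t, eps_t ~ iid N(0, 1).
    """
    rng = np.random.default_rng(seed)
    h = np.empty(n)
    h[0] = mu + sigma_eta / np.sqrt(1 - phi**2) * rng.normal()   # stationary start
    for t in range(1, n):
        h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.normal()
    y = np.exp(h / 2) * rng.normal(size=n)
    return y, h

y, h = simulate_sv(2000)
print(y.std(), np.exp(h / 2).mean())
```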


8.
In this study an attempt is made to assess statistically the validity of two theories of the origin of comets. The subject still provokes great controversy among astronomers, but recently two main schools of thought have developed.

These are that comets are of

(i) planetary origin,

(ii) interstellar origin.

Many theories have been expounded within each school of thought, but at present one theory in each is generally accepted. This paper sets out to identify the statistical implications of each theory and to evaluate each in terms of those implications.


9.
Four procedures are suggested for estimating the parameter ‘a’ in the Pauling equation:

e^{-X/a} + e^{-Y/a} = 1.

The procedures are: using the mean of the individual solutions; least squares with Y as the subject of the equation; least squares with X as the subject; and maximum likelihood under a statistical model. To compare these estimators we use Efron's (1979) bootstrap technique, since distributional results are not available. The example also illustrates the role of the bootstrap in statistical inference.
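A sketch of the second procedure (least squares with Y as the subject) together with a pairs bootstrap is given below; the data are synthetic, generated from an assumed a = 0.3 with added noise.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)

# Synthetic data satisfying e^(-X/a) + e^(-Y/a) = 1 with a = 0.3, plus noise on Y.
a_true = 0.3
X = rng.uniform(0.3, 1.5, size=40)
Y = -a_true * np.log(1 - np.exp(-X / a_true)) + rng.normal(0, 0.02, size=40)

def fit_a(X, Y):
    """Least squares with Y the subject: Y = -a * log(1 - e^(-X/a))."""
    def sse(a):
        return np.sum((Y + a * np.log(1 - np.exp(-X / a))) ** 2)
    return minimize_scalar(sse, bounds=(0.05, 2.0), method='bounded').x

a_hat = fit_a(X, Y)

# Nonparametric bootstrap of the pairs (X_i, Y_i) for a standard error.
boot = np.array([fit_a(X[idx], Y[idx])
                 for idx in rng.integers(0, len(X), size=(500, len(X)))])
print(a_hat, boot.std())
```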


10.
Permutation tests for symmetry are proposed for data subject to right censoring. Such tests bear directly on the assumptions underlying the generalized Wilcoxon test, since the symmetric logistic distribution for log-errors has been used to motivate Wilcoxon scores in the censored accelerated failure time model. Its principal competitor is the log-rank (LGR) test, motivated by a positively skewed extreme value error distribution. The proposed one-sided tests of symmetry against the alternative of positive skewness are directly relevant to the choice between these two tests.

The permutation tests use statistics from the weighted LGR class normally used for two-sample comparisons. Within this class, the test using LGR weights (all weights equal) showed the greatest discriminatory power in simulations comparing logistic errors with extreme value errors.

In the test construction, a median estimate obtained by inverting the Kaplan–Meier estimator divides the data into a “control” group to its left and a “treatment” group to its right. As an unavoidable consequence of testing symmetry, censored observations in the control group become uninformative in this two-sample comparison. Heavy early censoring can therefore reduce the effective sample size of the control group and diminish the power for detecting asymmetry in the population distribution.
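One plausible rendering of this construction is sketched below: the median is inverted from the Kaplan–Meier curve, distances to its left and right are compared, and a permutation p-value is computed from a rank-sum statistic. This is a loose stand-in for the paper's weighted log-rank statistics: censored control observations are dropped (as the abstract says they are uninformative), and censoring on the right is ignored for simplicity.

```python
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(6)

# Synthetic right-censored data from a positively skewed (exponential) lifetime model.
T_true = rng.exponential(1.0, size=200)
C = rng.exponential(3.0, size=200)               # censoring times
T = np.minimum(T_true, C)
E = (T_true <= C).astype(int)                    # 1 = event observed

# Median via the inverted Kaplan-Meier estimator.
med = KaplanMeierFitter().fit(T, event_observed=E).median_survival_time_

# "Control": uncensored events left of the median (censored ones are uninformative).
# "Treatment": observations to its right (censoring ignored in this rough sketch).
left = med - T[(T <= med) & (E == 1)]
right = T[T > med] - med

def rank_sum(a, b):
    z = np.concatenate([a, b])
    r = z.argsort().argsort() + 1                # ranks (continuous data, no ties)
    return r[:len(a)].sum()

obs = rank_sum(left, right)
pooled = np.concatenate([left, right])
perms = []
for _ in range(2000):
    rng.shuffle(pooled)
    perms.append(rank_sum(pooled[:len(left)], pooled[len(left):]))
p_value = np.mean(np.array(perms) <= obs)        # small left ranks => positive skew
print(med, p_value)
```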


11.
We consider wavelet-based nonlinear estimators, constructed by thresholding the empirical wavelet coefficients, for mean regression functions with strong mixing errors, and investigate their asymptotic rates of convergence. We show that these estimators achieve nearly optimal convergence rates, up to a logarithmic factor, over a large range of Besov classes B^s_{p,q}. The theory is illustrated with some numerical examples.

A new ingredient in our development is a Bernstein-type exponential inequality for a sequence of random variables that have a certain mixing structure and are not necessarily bounded or sub-Gaussian. This moderate deviation inequality may be of independent interest.
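A minimal sketch of the estimator's mechanics (decompose, soft-threshold the empirical detail coefficients at the universal level, reconstruct) is shown below, with i.i.d. noise standing in for the strong mixing errors of the paper.

```python
import numpy as np
import pywt

rng = np.random.default_rng(7)

# Noisy samples of a regression function on an equispaced grid.
n = 1024
t = np.linspace(0, 1, n)
signal = np.sin(4 * np.pi * t) + (t > 0.5)       # smooth part plus a jump
y = signal + 0.3 * rng.normal(size=n)            # (mixing errors would go here)

# Wavelet decomposition, soft-thresholding of detail coefficients, reconstruction.
coeffs = pywt.wavedec(y, 'db4', level=5)
sigma_hat = np.median(np.abs(coeffs[-1])) / 0.6745    # noise scale from finest level
thr = sigma_hat * np.sqrt(2 * np.log(n))              # universal threshold
denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
y_hat = pywt.waverec(denoised, 'db4')

print(np.mean((y_hat - signal) ** 2))            # empirical risk of the estimator
```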


12.
13.
Asymptotically valid inference in linear regression models is easily achieved under mild conditions using the well-known Eicker–White heteroskedasticity-robust covariance matrix estimator or one of its variants. In finite samples, however, such inference can suffer from substantial size distortion. Indeed, it is well established in the literature that the finite-sample accuracy of a test may depend on which variant of the Eicker–White estimator is used, on the underlying data generating process (DGP), and on the desired level of the test.

This paper develops a new variant of the Eicker–White estimator that explicitly aims to minimize the finite-sample null error in rejection probability (ERP) of the test. This is made possible by selecting, through a numerical algorithm based on the wild bootstrap, the transformation of the squared residuals that yields the smallest possible ERP. Monte Carlo evidence indicates that this new procedure achieves a level of robustness to the DGP, sample size, and nominal testing level unequaled by any other asymptotic test based on an Eicker–White estimator.
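The paper's ERP-minimizing variant is built on top of the classic family recalled below; this sketch implements only the standard HC0-HC3 transformations of the squared residuals, not the wild-bootstrap selection algorithm.

```python
import numpy as np

def hc_covariance(X, y, variant="HC3"):
    """Eicker-White covariance of the OLS estimator, classic finite-sample variants.

    HC0: u_i^2;  HC1: n/(n-k) * u_i^2;
    HC2: u_i^2 / (1 - h_i);  HC3: u_i^2 / (1 - h_i)^2.
    """
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    u = y - X @ beta                              # OLS residuals
    h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)   # leverages h_i
    w = {"HC0": u**2,
         "HC1": u**2 * n / (n - k),
         "HC2": u**2 / (1 - h),
         "HC3": u**2 / (1 - h)**2}[variant]
    return XtX_inv @ (X.T * w) @ X @ XtX_inv      # sandwich estimator

rng = np.random.default_rng(8)
n = 60
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n) * (1 + np.abs(X[:, 1]))
V = hc_covariance(X, y, "HC3")
print(np.sqrt(np.diag(V)))                        # heteroskedasticity-robust SEs
```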


14.
This paper considers constant partially accelerated life tests for series system products, where a dependent Marshall–Olkin (M-O) bivariate exponential distribution is assumed for the components.

Based on progressively type-II censored and masked data, the maximum likelihood estimates of the parameters and acceleration factors are obtained using a decomposition approach. The method also applies to the Bayes estimates, which are too complex to obtain in the usual way. Finally, a Monte Carlo simulation study is carried out to verify the accuracy of the methods under different masking probabilities and censoring schemes.
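The Marshall–Olkin construction itself is easy to simulate via its shock representation, as the sketch below shows for a series system; censoring, masking, and acceleration are not modeled here, and the rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)

def mo_bivariate_exponential(lam1, lam2, lam12, size):
    """Marshall-Olkin bivariate exponential via the shock representation:
    X = min(Z1, Z12), Y = min(Z2, Z12) with independent exponential shocks.
    """
    z1 = rng.exponential(1 / lam1, size)
    z2 = rng.exponential(1 / lam2, size)
    z12 = rng.exponential(1 / lam12, size)        # common shock -> dependence
    return np.minimum(z1, z12), np.minimum(z2, z12)

X, Y = mo_bivariate_exponential(0.5, 0.8, 0.3, size=10000)
T = np.minimum(X, Y)                              # series-system lifetime
# For a series system, T = min(Z1, Z2, Z12) ~ Exp(lam1 + lam2 + lam12).
print(T.mean(), 1 / (0.5 + 0.8 + 0.3))
```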


15.
16.
The concept of ranked set sampling (RSS) is applicable whenever a set of sampling units can easily be ranked by a judgment method or by an auxiliary variable. In this paper, we consider a study variable Y correlated with the auxiliary variable X, which is used to rank the sampling units. Further, (X, Y) is assumed to have a Cambanis-type bivariate uniform (CTBU) distribution. We obtain an unbiased estimator of a scale parameter associated with the study variable Y under different RSS schemes. We compare the efficiency of the proposed estimators numerically and present the trends in their efficiency under various RSS schemes with respect to the parameters through line and surface plots. Further, we develop a Matlab function to simulate data from the CTBU distribution and assess the performance of the proposed estimators through a simulation study. The results are also applied to real-life data.

Keywords: Ranked set sampling; concomitants of order statistics; Cambanis-type bivariate uniform distribution; best linear unbiased estimator.

Subject classifications: 62D05, 62F07, 62G30.
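A generic sketch of the RSS scheme follows, with a simple correlated (X, Y) pair standing in for the CTBU model (whose simulator in the paper is a Matlab function); it illustrates the usual efficiency gain of the RSS sample mean over simple random sampling.

```python
import numpy as np

rng = np.random.default_rng(10)

def draw_xy(n, rho=0.8):
    """Generic correlated (X, Y) pair standing in for the CTBU model."""
    x = rng.uniform(size=n)
    y = rho * x + (1 - rho) * rng.uniform(size=n)
    return x, y

def ranked_set_sample(m, cycles):
    """Classical RSS: each cycle draws m sets of m units, ranks every set on the
    auxiliary X, and measures Y on the i-th ranked unit of the i-th set."""
    ys = []
    for _ in range(cycles):
        for i in range(m):
            x, y = draw_xy(m)
            ys.append(y[np.argsort(x)[i]])        # judgment-ranked selection
    return np.array(ys)

# Efficiency check: variance of the sample mean, RSS versus SRS of equal size.
reps, m, cycles = 500, 4, 10
rss_means = [ranked_set_sample(m, cycles).mean() for _ in range(reps)]
srs_means = [draw_xy(m * cycles)[1].mean() for _ in range(reps)]
print(np.var(rss_means), np.var(srs_means))       # RSS mean is typically less variable
```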

17.
We define a new family of stochastic processes called Markov modulated Brownian motions with a sticky boundary at zero. Intuitively, each process is a regulated Markov-modulated Brownian motion whose boundary behavior is modified to slow down at level zero.

To determine the stationary distribution of a sticky MMBM, we follow a Markov-regenerative approach similar to the one developed with great success in the context of quasi-birth-and-death processes and fluid queues. Our analysis also relies on recent work showing that Markov-modulated Brownian motions arise as limits of a parametrized family of fluid queues.
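For intuition, a plain regulated MMBM (without the sticky slow-down at zero, which requires a more careful time change) can be simulated by Euler discretization, as sketched below with an assumed two-state modulating chain.

```python
import numpy as np

rng = np.random.default_rng(11)

def regulated_mmbm(mu, sigma, Q, T, dt=1e-3):
    """Euler simulation of a Markov-modulated Brownian motion regulated at 0.

    mu, sigma : per-state drift and volatility
    Q         : generator matrix of the modulating CTMC
    Note: the 'sticky' slow-down at zero from the paper is NOT reproduced here.
    """
    n = int(T / dt)
    J = 0                                         # current CTMC state
    X = np.empty(n)
    x = 0.0
    for t in range(n):
        # CTMC jump with probability ~ -Q[J, J] * dt over the step.
        if rng.uniform() < -Q[J, J] * dt:
            p = np.maximum(Q[J], 0.0)             # off-diagonal jump rates
            J = rng.choice(len(p), p=p / p.sum())
        x = max(0.0, x + mu[J] * dt + sigma[J] * np.sqrt(dt) * rng.normal())
        X[t] = x
    return X

Q = np.array([[-1.0, 1.0], [2.0, -2.0]])
path = regulated_mmbm(mu=[-0.5, 1.0], sigma=[1.0, 0.5], Q=Q, T=10.0)
print(path.mean(), (path == 0.0).mean())          # time fraction at the boundary
```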


18.
In this paper, we study, by means of randomized sampling, the long-run stability of an open Markov population fed by time-dependent Poisson inputs. We show that the state probabilities within the transient states converge, even when the overall expected population size grows without bound, under general conditions on the transition matrix and the input intensities.

Building on the convergence results, we obtain ML estimators for a particular sequence of input intensities in which the sequence of new arrivals is modeled by a sigmoidal function. These estimators allow the evolution of the relative population structure in the transient states to be forecast by confidence intervals.

Applying these results to a consumption credit portfolio, we estimate the implicit default rate.
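A toy version of such a population is easy to simulate: Poisson arrivals with an assumed sigmoidal intensity enter the first transient state, and individuals then move (or exit) by multinomial sampling of the transition matrix rows; the matrix and intensity parameters below are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(12)

# Three transient states; the last column is the exit probability of each row.
P = np.array([[0.6, 0.3, 0.0, 0.1],
              [0.0, 0.7, 0.2, 0.1],
              [0.0, 0.0, 0.8, 0.2]])

def sigmoid_intensity(t, a=100.0, b=0.5, c=10.0):
    """Assumed sigmoidal sequence of Poisson input intensities."""
    return a / (1.0 + np.exp(-b * (t - c)))

def simulate(T):
    counts = np.zeros(3, dtype=int)
    history = []
    for t in range(T):
        counts[0] += rng.poisson(sigmoid_intensity(t))   # new arrivals enter state 0
        new = np.zeros(3, dtype=int)
        for i in range(3):
            moves = rng.multinomial(counts[i], P[i])     # multinomial thinning
            new += moves[:3]                             # last component leaves
        counts = new
        history.append(counts / max(counts.sum(), 1))    # relative structure
    return np.array(history)

rel = simulate(40)
print(rel[-1])       # relative population structure settles down over time
```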


19.
According to the latest proposals of the Basel Committee, banks are allowed to use statistical approaches to compute the capital charge covering financial risks such as credit risk, market risk, and operational risk.

It is widely recognized that internal loss data alone do not suffice to provide an accurate capital charge in financial risk management, especially for high-severity, low-frequency events. Financial institutions typically use external loss data to augment the available evidence and thereby obtain more accurate risk estimates. Rigorous statistical treatment is required to make internal and external data comparable and to ensure that merging the two databases leads to unbiased estimates.

The goal of this paper is to propose a correct statistical treatment for making external and internal data comparable and, therefore, mergeable. The methodology augments internal losses with relevant, rather than redundant, external loss data.


20.
Alternative methods of trend extraction and of seasonal adjustment are described that operate in the time domain and in the frequency domain.

The time-domain methods that are implemented in the TRAMO–SEATS and the STAMP programs are compared. An abbreviated time-domain method of seasonal adjustment that is implemented in the IDEOLOG program is also presented. Finite-sample versions of the Wiener–Kolmogorov filter are described that can be used to implement the methods in a common way.

The frequency-domain method, which is also implemented in the IDEOLOG program, employs an ideal frequency selective filter that depends on identifying the ordinates of the Fourier transform of a detrended data sequence that should lie in the pass band of the filter and those that should lie in its stop band. Filters of this nature can be used both for extracting a low-frequency cyclical component of the data and for extracting the seasonal component.
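A bare-bones version of such an ideal frequency-selective filter is sketched below: the Fourier ordinates at the seasonal frequency and its harmonics are set to zero and the series is transformed back. The series, period, and band width are illustrative assumptions, not the IDEOLOG implementation.

```python
import numpy as np

rng = np.random.default_rng(13)

# Monthly-style series: trend + seasonal (period 12) + noise.
n = 240
t = np.arange(n)
x = 0.05 * t + np.sin(2 * np.pi * t / 12) + 0.3 * rng.normal(size=n)

def ideal_bandstop(x, period, width=1):
    """Ideal frequency-selective seasonal adjustment: zero out the Fourier
    ordinates at the seasonal frequency and its harmonics."""
    n = len(x)
    f = np.fft.rfft(x)                           # ordinate index k <-> frequency k/n
    for h in range(1, period // 2 + 1):
        k = round(h * n / period)                # harmonic h of the seasonal frequency
        lo, hi = max(k - width, 0), min(k + width, len(f) - 1)
        f[lo:hi + 1] = 0.0
    return np.fft.irfft(f, n)

adjusted = ideal_bandstop(x, period=12)
print(np.std(x - adjusted))                      # scale of the extracted seasonal part
```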


