Similar Literature
20 similar documents found
1.
In 2005 Lipovetsky and Conklin proposed the two parameter ridge estimator (TRE) as an alternative to the ordinary least squares estimator (OLSE) and the ordinary ridge estimator (RE) in the presence of multicollinearity, and in 2006 Lipovetsky improved the two parameter model. In this paper, we introduce two new estimators. The first is the modified two parameter ridge estimator (MTRE), defined following Swindel (1976). The second is the restricted two parameter ridge estimator (RTRE), derived by imposing additional linear restrictions on the parameter vector. This estimator generalizes the restricted least squares estimator (RLSE) and includes the restricted ridge estimator (RRE) proposed by Groß in 2003. A numerical example is provided and a simulation study is conducted to compare the RTRE with the OLSE, RLSE, RE, RRE and TRE.
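
For reference, a minimal numpy sketch of the two baseline estimators named above, the OLSE and the ordinary ridge estimator; the design matrix, response and ridge parameter k are illustrative, and the two parameter and restricted estimators of the cited papers are not reproduced here.

```python
import numpy as np

def ols_estimator(X, y):
    """Ordinary least squares: beta = (X'X)^{-1} X'y."""
    return np.linalg.solve(X.T @ X, X.T @ y)

def ridge_estimator(X, y, k):
    """Ordinary ridge estimator: beta(k) = (X'X + kI)^{-1} X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

# Illustrative data with strong collinearity between the last two columns
rng = np.random.default_rng(0)
n = 50
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)   # nearly collinear with x1
X = np.column_stack([np.ones(n), x1, x2])
y = 1.0 + 2.0 * x1 + 2.0 * x2 + rng.normal(size=n)

print(ols_estimator(X, y))         # unstable under multicollinearity
print(ridge_estimator(X, y, 0.1))  # shrunken, more stable coefficients
```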

2.
Two simple tests which allow for unequal sample sizes are considered for testing hypotheses about the common mean of two normal populations. The first is an exact test of size α constructed from the two single-sample t-statistics, made exact through a random allocation of α between the two t-tests. The test statistic of the second test is a weighted average of the two t-statistics with random weights. It is shown that the first test is more efficient than either individual t-test with respect to Bahadur asymptotic relative efficiency. It is also shown that the null distribution of the second test statistic, which is similar to the one based on the normalized Graybill-Deal test statistic, converges to a standard normal distribution. Finally, we compare the small sample properties of these tests, those given in Zhou and Mathew (1993), and some tests given in Cohen and Sackrowitz (1984) in a simulation study. In this study, we find that the second test performs better than the tests given in Zhou and Mathew (1993) and is comparable to the ones given in Cohen and Sackrowitz (1984) with respect to power.

3.
Abstract.  The spatial clustering of points from two or more classes (or species) has important implications in many fields and may cause segregation or association, which are two major types of spatial patterns between the classes. These patterns can be studied using a nearest neighbour contingency table (NNCT), which is constructed from the frequencies of nearest neighbour types. Three new multivariate clustering tests are proposed based on NNCTs, using the appropriate sampling distribution of the cell counts in a NNCT. The null patterns considered are random labelling (RL) and complete spatial randomness (CSR) of points from two or more classes. The finite sample performance of these tests is compared with that of other tests in terms of empirical size and power. It is demonstrated that the newly proposed NNCT tests perform relatively well compared with their competitors, and the tests are illustrated using two example data sets.
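
A minimal sketch of how an NNCT can be assembled, assuming point coordinates and class labels are available; it uses scipy's cKDTree for the nearest neighbour search, and the data, function name and class labels are hypothetical. The proposed tests themselves are not reproduced.

```python
import numpy as np
from scipy.spatial import cKDTree

def nnct(points, labels):
    """Nearest neighbour contingency table: entry (i, j) counts points of
    class i whose nearest neighbour belongs to class j."""
    classes = np.unique(labels)
    tree = cKDTree(points)
    # k=2 because the nearest neighbour of a point is the point itself
    _, idx = tree.query(points, k=2)
    nn_labels = labels[idx[:, 1]]
    table = np.zeros((len(classes), len(classes)), dtype=int)
    for ci, c_base in enumerate(classes):
        for cj, c_nn in enumerate(classes):
            table[ci, cj] = np.sum((labels == c_base) & (nn_labels == c_nn))
    return classes, table

# Two hypothetical classes of points in the unit square
rng = np.random.default_rng(1)
pts = rng.uniform(size=(100, 2))
labs = rng.choice(np.array(["A", "B"]), size=100)
print(nnct(pts, labs))
```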

4.
Bush and Ostrom (1979) recently settled most of the open questions concerning inequivalent solutions of a class of semiregular (SR) designs which can be constructed from nets. This paper is a study of the same nature for two families of regular (R) designs derived from finite projective planes. One family presents no problems, but the other, a 'double' family with two parameters, is much more difficult. In fact it is solved here only for designs based on planes of orders 3, 4, 5 and 8. Certain general methods exist and are indicated, but we were unable to resolve even the case of order 7 using this technique. Basically we either exhibit inequivalent solutions or show that there is only one solution, settling a number of open cases. In particular, for the case λ1 = 2, λ2 = 1, we give new solutions to a number of D(2) designs, or group divisible designs with two associate classes, which have no repeated blocks, in contrast with the published solutions which have this undesirable property for a number of applications.

5.
Three tests are considered concerning the common mean of two normal populations: (1) an F test based on a sample from one population, (2) a proposed test based on the sum of the F statistics from independent samples from the two populations, and (3) a test based on the maximum of the F statistics from two independent samples from the two populations. A condition under which test (2) is locally more powerful than test (1) is given. As the test statistic in test (2) does not follow a standard distribution, a formula for approximating the observed significance level is provided. A simulation study is used to compare the power of these tests.

6.
We first consider the problem of estimating the common mean of two normal distributions with unknown ordered variances. We give a broad class of estimators which includes the estimators proposed by Nair (1982) and Elfessi et al. (1992), and show that these estimators stochastically dominate estimators which do not take into account the order restriction on the variances, including the one given by Graybill and Deal (1959). We then propose a broad class of individual estimators of two ordered means when the unknown variances are also ordered. We show that in estimating the mean with the larger variance, estimators which do not take into account the order restriction on the variances are stochastically dominated by the proposed class of estimators, which take into account both order restrictions. However, in estimating the mean with the smaller variance, a similar improvement is not possible even in terms of mean squared error. We also show a domination result in the simultaneous estimation problem of two ordered means. Further, improving upon the unbiased estimators of the two means is discussed.
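
For orientation, a short sketch of the Graybill and Deal (1959) common-mean estimator referred to above, i.e. a weighted average of the two sample means with weights n_i / s_i²; the sample data are made up, and the order-restricted estimators studied in the paper are not reproduced.

```python
import numpy as np

def graybill_deal(x1, x2):
    """Graybill-Deal estimator of the common mean of two normal samples:
    a weighted average of the sample means with weights n_i / s_i^2."""
    n1, n2 = len(x1), len(x2)
    w1 = n1 / np.var(x1, ddof=1)
    w2 = n2 / np.var(x2, ddof=1)
    return (w1 * np.mean(x1) + w2 * np.mean(x2)) / (w1 + w2)

rng = np.random.default_rng(2)
sample1 = rng.normal(loc=5.0, scale=1.0, size=20)   # smaller variance
sample2 = rng.normal(loc=5.0, scale=3.0, size=30)   # larger variance
print(graybill_deal(sample1, sample2))
```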

7.
This paper is concerned with model monitoring and quality control schemes founded on a decision theoretic formulation. After identifying unacceptable weaknesses associated with Wald, sequential probability ratio test (SPRT) and Cuscore monitors, the Bayes decision monitor is developed. In particular, the paper focuses on what is termed a 'popular decision scheme' (PDS), for which the monitoring run loss functions are specified simply in terms of two indifference qualities. For most applications, the PDS results in forward cumulative sum tests of functions of the observations. For many exponential family applications, the PDS is equivalent to well-used SPRTs and Cusums. In particular, a neat interpretation of V-mask cusum chart settings is derived when simultaneously running two symmetric PDSs. However, apart from providing a decision theoretic basis for monitoring, sensible procedures occur in applications for which SPRTs and Cuscores are particularly unsatisfactory. Average run lengths (ARLs) are given for two special cases, and the inadequacy of the Wald and similar ARL approximations is revealed. Generalizations and applications to normal and dynamic linear models are discussed. The paper concludes by deriving conditions under which sequences of forward and backward sequential or Cusum chart tests are equivalent.
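
As a point of reference for the forward cumulative sum tests mentioned above, a minimal one-sided Cusum recursion; the target value, reference value k and decision limit h are illustrative assumptions, and the Bayes decision / PDS formulation of the paper is not reproduced.

```python
import numpy as np

def one_sided_cusum(x, target, k, h):
    """Upper one-sided Cusum: S_t = max(0, S_{t-1} + (x_t - target) - k);
    signal the first time S_t exceeds the decision limit h."""
    s = 0.0
    path = []
    for t, xt in enumerate(x):
        s = max(0.0, s + (xt - target) - k)
        path.append(s)
        if s > h:
            return t, np.array(path)   # index of the first signal
    return None, np.array(path)

rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(0, 1, 30), rng.normal(1.0, 1, 30)])  # mean shift at t = 30
print(one_sided_cusum(data, target=0.0, k=0.5, h=4.0)[0])
```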

8.
In this paper, we propose a new augmented Dickey–Fuller-type test for unit roots which accounts for two structural breaks. We consider two different specifications: (a) two breaks in the level of a trending data series, and (b) two breaks in the level and slope of a trending data series. The breaks, whose time of occurrence is assumed to be unknown, are modeled as innovational outliers and thus take effect gradually. Using Monte Carlo simulations, we show that our proposed test has correct size, stable power, and identifies the structural breaks accurately.
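
As a baseline, a standard augmented Dickey–Fuller test can be run with statsmodels; the series below is synthetic, and the two-break innovational-outlier specification of the paper is not implemented here; the sketch only indicates the kind of regression the proposed test augments with break dummies.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(4)
# A random walk with a level shift at t = 100 (one structural break)
eps = rng.normal(size=200)
y = np.cumsum(eps)
y[100:] += 5.0

# Standard ADF test with a constant; the paper's test augments a regression
# like this with dummy variables for the two (unknown) break dates.
stat, pvalue, usedlag, nobs, crit, icbest = adfuller(y, regression="c", autolag="AIC")
print(stat, pvalue, crit)
```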

9.
A method for constructing two-stage (double sample) tests is presented which does not require the evaluation of a complicated bivariate distribution function. The procedure results from a modification of Fisher's method for combining independent tests of significance and is distribution free in the way it combines the test results from the two samples. However, the one-sample test statistics for the two samples are assumed to have continuous distributions and may be parametric. A rule is also given for the selection of a particular test out of the family of possible two-stage tests which can be generated by this method. Specific examples are given and comparisons are made with two double sample tests which have previously been presented in the literature.
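
For context, a minimal sketch of Fisher's method for combining independent significance tests, which the proposed procedure modifies; the stage samples and one-sample t-tests are illustrative, and the paper's selection rule and modification are not reproduced.

```python
from math import log
import numpy as np
from scipy.stats import chi2, ttest_1samp

def fisher_combine(p_values):
    """Fisher's method: under H0, -2 * sum(log p_i) ~ chi-square with 2m df."""
    stat = -2.0 * sum(log(p) for p in p_values)
    df = 2 * len(p_values)
    return stat, chi2.sf(stat, df)

rng = np.random.default_rng(5)
stage1 = rng.normal(loc=0.3, size=15)   # first sample
stage2 = rng.normal(loc=0.3, size=25)   # second sample, drawn later
p1 = ttest_1samp(stage1, 0.0).pvalue
p2 = ttest_1samp(stage2, 0.0).pvalue
print(fisher_combine([p1, p2]))
```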

10.
A comparative study is made of three tests, developed by James (1951), Welch (1951) and Brown & Forsythe (1974). James presented two methods of which only one is considered in this paper. It is shown that this method gives better control over the size than the other two tests. None of these methods is uniformly more powerful than the other two. In some cases the tests of James and Welch reject a false null hypothesis more often than the test of Brown & Forsythe, but there are also situations in which it is the other way around.

We conclude that for implementation in a statistical software package the very complicated test of James is the most attractive. A practical disadvantage of this method can be overcome by a minor modification.

11.
Data in the form of proportions with extra-dispersion (over- or under-dispersion) arise in many biomedical, epidemiological, and toxicological applications. In some situations, two samples of data in the form of proportions with extra-dispersion arise, and the problem is to test the equality of the proportions in the two groups with unspecified and possibly unequal extra-dispersion parameters. This problem is analogous to the traditional Behrens-Fisher problem, in which two normal population means with possibly unequal variances are compared. To deal with this problem we develop eight tests and compare them in terms of empirical size and power, using a simulation study. Simulations show that a C(α) test based on extended quasi-likelihood estimates of the nuisance parameters holds the nominal level most closely and is at least as powerful as any other statistic that is not liberal. It has the simplest formula, is based on estimates of the nuisance parameters only under the null hypothesis, and is the easiest to calculate. It is also robust in the sense that no distributional assumption is required to develop the statistic.

12.
A significant problem in the collection of responses to potentially sensitive questions, such as those relating to illegal, immoral or embarrassing activities, is non-sampling error due to refusal to respond or false responses. Eichhorn & Hayre (1983) suggested the use of scrambled responses to reduce this form of bias. This paper considers a linear regression model in which the dependent variable is unobserved, but its sum or product with a scrambling random variable of known distribution is known. The performance of two likelihood-based estimators is investigated, namely a Bayesian estimator obtained through a Markov chain Monte Carlo (MCMC) sampling scheme and a classical maximum-likelihood estimator. These two estimators and an estimator suggested by Singh, Joarder & King (1996) are compared. Monte Carlo results show that the Bayesian estimator outperforms the classical estimators in all cases, and the relative performance of the Bayesian estimator improves as the responses become more scrambled.
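
A minimal sketch of multiplicative scrambling in the spirit of Eichhorn & Hayre (1983), assuming a scrambling variable with known distribution and mean one; the gamma-distributed sensitive variable and the uniform scrambler are illustrative assumptions, and the paper's regression estimators are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1000
true_y = rng.gamma(shape=2.0, scale=5.0, size=n)   # sensitive quantity (unobserved)

# Scrambling variable with known distribution and E(s) = 1
s = rng.uniform(low=0.5, high=1.5, size=n)
reported = true_y * s                               # only this is observed

# Because E(s) = 1 and s is independent of y, E(reported) = E(y),
# so a simple moment estimator of the mean remains available.
print(true_y.mean(), reported.mean())
```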

13.
This paper deals with the problem of multicollinearity in a multiple linear regression model with linear equality restrictions. The restricted two parameter estimator, proposed for the multicollinearity setting, satisfies the restrictions. The performance of the restricted two parameter estimator relative to the restricted least squares (RLS) estimator and the ordinary least squares (OLS) estimator is examined under the mean square error (MSE) matrix criterion, both when the restrictions are correct and when they are not. The necessary and sufficient conditions for the restricted ridge regression, restricted Liu and restricted shrunken estimators, which are special cases of the restricted two parameter estimator, to have a smaller MSE matrix than the RLS and OLS estimators are derived, both when the restrictions hold and when they do not. Theoretical results are illustrated with numerical examples based on the Webster, Gunst and Mason data and the Gorman and Toman data. We conclude with a Monte Carlo simulation, which shows that when the variance of the error term and the correlation between the explanatory variables are large, the restricted two parameter estimator performs better than the RLS and OLS estimators under the configurations examined.
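
For reference, a short sketch of the restricted least squares estimator under exact linear restrictions Rβ = r, which the restricted two parameter estimator builds on; the data and the restriction are illustrative, and the restricted ridge, Liu and shrunken estimators of the paper are not reproduced.

```python
import numpy as np

def rls_estimator(X, y, R, r):
    """Restricted least squares under R beta = r:
    b_RLS = b_OLS + (X'X)^{-1} R' [R (X'X)^{-1} R']^{-1} (r - R b_OLS)."""
    XtX_inv = np.linalg.inv(X.T @ X)
    b_ols = XtX_inv @ X.T @ y
    middle = np.linalg.inv(R @ XtX_inv @ R.T)
    return b_ols + XtX_inv @ R.T @ middle @ (r - R @ b_ols)

rng = np.random.default_rng(7)
n = 60
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 2.0, 3.0]) + rng.normal(size=n)

# Restriction: beta_1 + beta_2 = 5 (a single linear equality)
R = np.array([[0.0, 1.0, 1.0]])
r = np.array([5.0])
print(rls_estimator(X, y, R, r))
```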

14.
Youden (1953) discussed the practice of averaging the two most concordant observations in sets of three measurements as a method of estimating location. Distributional results for this estimator can be found in Seth (1950) and Lieblein (1952). It follows from their work that the sample median has smaller variance for normal and uniform populations. In this paper it is shown that the median stochastically dominates the average of the two closest observations for uniform, normal, double-exponential and Cauchy populations, and thus is the superior resistant estimator in these cases for a broad class of loss functions. However, an example is given in which, for a particular contamination model and loss function, the mean of the two closest observations has smaller risk than the median.
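
A small simulation sketch comparing the two estimators discussed above under normal sampling; the number of replications and the squared-error loss are illustrative choices, and the stochastic-dominance results of the paper are not reproduced.

```python
import numpy as np

def mean_of_two_closest(x):
    """Average the two closest observations in a sample of three."""
    x = np.sort(x)
    return (x[0] + x[1]) / 2 if (x[1] - x[0]) <= (x[2] - x[1]) else (x[1] + x[2]) / 2

rng = np.random.default_rng(8)
reps = 100_000
err_closest, err_median = [], []
for _ in range(reps):
    sample = rng.normal(loc=0.0, scale=1.0, size=3)
    err_closest.append(mean_of_two_closest(sample) ** 2)
    err_median.append(np.median(sample) ** 2)

print("MSE closest-two:", np.mean(err_closest))
print("MSE median:     ", np.mean(err_median))
```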

15.
One difficulty in developing multivariate attribute control charts is the lack of the required joint distribution. If the joint distribution of two (or more) attribute characteristics can be generated, a bivariate (or multivariate) attribute control chart can be developed based on Type I and Type II errors. The copula function provides a solution. In this article, applying the copula function approach, we obtain the joint distribution of two correlated zero-inflated Poisson (ZIP) distributions. Using this joint distribution, we develop a bivariate control chart for monitoring correlated rare events. This copula-based bivariate ZIP control chart is compared with the simultaneous use of two separate univariate ZIP control charts. Based on the average run length (ARL) measure, it is shown that the proposed control chart is much better than the simultaneous use of two separate univariate charts. In addition, a real case study related to the environmental air in a sterilization process is investigated to show the applicability of the developed control chart.
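
A minimal sketch of coupling two ZIP marginals with a Gaussian copula, assuming that family for illustration (the paper's copula choice and chart limits are not reproduced); the ZIP parameters and the correlation are made-up values.

```python
import numpy as np
from scipy.stats import norm, poisson

def zip_ppf(u, pi, lam):
    """Inverse CDF of a zero-inflated Poisson with zero-inflation
    probability pi and Poisson mean lam, evaluated at uniforms u."""
    u = np.asarray(u)
    out = np.zeros_like(u, dtype=int)
    for i, ui in enumerate(u):
        k = 0
        # ZIP CDF: F(k) = pi + (1 - pi) * PoissonCDF(k); invert by searching k
        while pi + (1 - pi) * poisson.cdf(k, lam) < ui:
            k += 1
        out[i] = k
    return out

def bivariate_zip_gaussian_copula(n, pi1, lam1, pi2, lam2, rho, rng):
    """Draw n pairs with ZIP marginals linked by a Gaussian copula."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n)
    u = norm.cdf(z)                      # componentwise uniforms
    x1 = zip_ppf(u[:, 0], pi1, lam1)
    x2 = zip_ppf(u[:, 1], pi2, lam2)
    return x1, x2

rng = np.random.default_rng(9)
c1, c2 = bivariate_zip_gaussian_copula(5000, 0.8, 2.0, 0.7, 3.0, 0.5, rng)
print(np.corrcoef(c1, c2)[0, 1])         # induced correlation between the counts
```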

16.
The analysis of two-way contingency tables is common in clinical studies. In addition to summary counts and percentages, statistical tests or summary measures are often desired. If the data can be viewed as two categorical measurements on the same experimental unit (matched pair data), then a test of marginal homogeneity may be appropriate. The most common clinical example is the so-called 'shift table', whereby a quantity is tested for change between two time points. The two principal marginal homogeneity tests are the Stuart–Maxwell and Bhapkar tests. At present, SAS software does not compute either test directly (for tables with more than two categories) and a programmatic solution is required. Two examples of programmatic SAS code are found in the current literature. Although accurate in most instances, they fail to produce output for certain tables ('special cases'). After summarizing the mathematics behind the two tests, a SAS macro is presented which produces correct output for all tables. Finally, several examples are coded and presented with the resultant output. Copyright © 2013 John Wiley & Sons, Ltd.
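
For illustration, a Python sketch (not the SAS macro of the paper) of the Stuart–Maxwell statistic described above; the 3 × 3 shift table is hypothetical, and the Bhapkar test and the handling of the paper's 'special cases' are only indicated by the use of a generalized inverse.

```python
import numpy as np
from scipy.stats import chi2

def stuart_maxwell(table):
    """Stuart-Maxwell test of marginal homogeneity for a K x K table.
    d_i = (row margin i) - (column margin i) for i = 1..K-1;
    V_ii = n_{i.} + n_{.i} - 2 n_{ii}, V_ij = -(n_ij + n_ji);
    statistic = d' V^{-1} d ~ chi-square with K-1 degrees of freedom."""
    n = np.asarray(table, dtype=float)
    K = n.shape[0]
    d = (n.sum(axis=1) - n.sum(axis=0))[: K - 1]
    V = np.empty((K - 1, K - 1))
    for i in range(K - 1):
        for j in range(K - 1):
            if i == j:
                V[i, j] = n[i, :].sum() + n[:, i].sum() - 2 * n[i, i]
            else:
                V[i, j] = -(n[i, j] + n[j, i])
    # The Moore-Penrose inverse keeps the computation defined when V is singular
    stat = float(d @ np.linalg.pinv(V) @ d)
    return stat, chi2.sf(stat, K - 1)

# A hypothetical 3 x 3 shift table (baseline category x post-baseline category)
shift = [[20, 10, 2],
         [5, 30, 10],
         [3, 8, 12]]
print(stuart_maxwell(shift))
```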

17.
Two-dimensional renewal functions, which are natural extensions of one-dimensional renewal functions, have wide applicability in areas where two random variables are needed to characterize the underlying process. These functions satisfy the renewal equation, which is not amenable to analytical solution. This paper proposes a simple approximation for the computation of the two-dimensional renewal function based only on the first two moments and the correlation coefficient of the variables. The approximation yields exact values of the renewal function for the bivariate exponential distribution. Illustrations are presented to compare our approximation with that of Iskandar (1991), who provided a computational procedure requiring the bivariate distribution function of the two variables. A two-dimensional warranty model is used to illustrate the approximation.

18.
One weakness of the ESACF approach to model identification by Tsay and Tiao (1984) is the ambiguity which can be caused by elements in the triangle that are only marginally larger than two standard deviations. To avoid this drawback, a vector sample autocorrelation function (VSACF) is defined and an automatic model identification procedure using the VSACF is developed. We illustrate this approach with four examples.

19.
The paper describes two regression models, principal components and maximum-likelihood factor analysis, which may be used when the stochastic predictor variables are highly intercorrelated and/or contain measurement error. The two problems can occur jointly, for example in social-survey data where the true (but unobserved) covariance matrix can be singular. Departure from singularity of the sample dispersion matrix is then due to measurement error. We first consider the more elementary principal components regression model, where it is shown that it can be derived as a special case of (i) canonical correlation, and (ii) restricted least squares. The second part treats the more general maximum-likelihood factor-analysis regression model, which is derived from the generalized inverse of the product of two singular matrices. It is also proved that factor-analysis regression can be considered as an instrumental variables estimator and therefore does not depend on whether factors have been "properly" identified in terms of substantive behaviour. Consequently the additional task of rotating factors to "simple structure" does not arise.
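
A minimal numpy sketch of principal components regression as described above: regress the response on the leading principal components of the centred predictors and map the coefficients back; the data and the number of retained components are illustrative, and the factor-analysis regression model is not reproduced.

```python
import numpy as np

def pcr(X, y, n_components):
    """Principal components regression: regress y on the leading principal
    components of the centred predictors, then map the coefficients back."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    # Principal directions from the SVD of the centred design matrix
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components].T                  # p x m loading matrix
    scores = Xc @ V                          # n x m component scores
    gamma = np.linalg.solve(scores.T @ scores, scores.T @ yc)
    beta = V @ gamma                         # back to the original predictors
    intercept = y.mean() - X.mean(axis=0) @ beta
    return intercept, beta

rng = np.random.default_rng(10)
n = 80
z = rng.normal(size=n)
X = np.column_stack([z + rng.normal(scale=0.05, size=n) for _ in range(4)])  # highly intercorrelated
y = X @ np.array([1.0, 1.0, 1.0, 1.0]) + rng.normal(size=n)
print(pcr(X, y, n_components=1))
```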

20.
In what follows, we introduce two Bayesian models for feature selection in high-dimensional data, specifically designed for the purpose of classification. We use two approaches to the problem: one which discards the components which have “almost constant” values (Model 1) and another which retains the components for which variations between the groups are larger than those within the groups (Model 2). We assume that p ≫ n, i.e. the number of components p is much larger than the number of samples n, and that only a few of those p components are useful for subsequent classification. We show that particular cases of the above two models recover familiar variance- or ANOVA-based component selection. When there are only two classes and the features are a priori independent, Model 2 reduces to the Feature Annealed Independence Rule (FAIR) introduced by Fan and Fan (2008) and can be viewed as a natural generalization of FAIR to the case of L > 2 classes. The performance of the methodology is studied via simulations and on a biological dataset of animal communication signals comprising 43 groups of electric signals recorded from tropical South American electric knife fishes.
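
For the two-class case, a small sketch of FAIR-style feature screening by absolute two-sample t-statistics, which Model 2 generalizes; the data dimensions, the number of retained features and the function name are illustrative assumptions.

```python
import numpy as np

def t_screen(X, labels, n_keep):
    """Rank features by the absolute two-sample t-statistic between the two
    groups and keep the n_keep largest, in the spirit of FAIR-type screening."""
    g0, g1 = X[labels == 0], X[labels == 1]
    m0, m1 = g0.mean(axis=0), g1.mean(axis=0)
    v0, v1 = g0.var(axis=0, ddof=1), g1.var(axis=0, ddof=1)
    t = (m1 - m0) / np.sqrt(v0 / len(g0) + v1 / len(g1))
    return np.argsort(-np.abs(t))[:n_keep]

rng = np.random.default_rng(11)
n, p = 40, 500                      # p much larger than n
X = rng.normal(size=(n, p))
labels = np.repeat([0, 1], n // 2)
X[labels == 1, :5] += 1.5           # only the first 5 features are informative
print(t_screen(X, labels, n_keep=10))
```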
