Similar Articles
20 similar articles found (search time: 15 ms)
1.
A nonparametric test procedure is proposed for the analysis of randomized complete block designs. Such a procedure may be carried out graphically in the form of a Shewhart control chart. Exact and asymptotic critical values are given for the implementation of the proposed procedure. A Monte Carlo study is conducted to compare the power of the proposed procedure with that of the analysis of variance, the analysis of means, and the Friedman procedures. Results of the study indicate that the proposed procedure has superior power when testing against slippage alternative hypotheses under heavy-tailed distributions such as the Cauchy distribution. However, when testing against symmetric alternatives under light-tailed distributions, the proposed procedure does not perform well.
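The power comparison described above can be mimicked in a few lines. As a hedged sketch, SciPy's rank-based Friedman test stands in for the proposed chart-based procedure, and a one-way ANOVA (blocking ignored, for brevity) plays the parametric competitor under a Cauchy slippage alternative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
b, k = 20, 3                                  # 20 blocks, 3 treatments
blocks = rng.standard_cauchy((b, 1))          # heavy-tailed block effects
data = blocks + rng.standard_cauchy((b, k))   # Cauchy errors
data[:, 2] += 2.0                             # slippage: one treatment shifted up

# Rank-based Friedman test: ranking within blocks blunts the Cauchy tails
fried_stat, fried_p = stats.friedmanchisquare(*data.T)

# Parametric competitor (one-way ANOVA; block structure ignored here)
f_stat, f_p = stats.f_oneway(*data.T)
```

With heavy-tailed errors the rank-based test typically flags the shifted treatment more reliably than the F test, in line with the abstract's Monte Carlo findings.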

2.
Abstract

The problem of testing equality of two multivariate normal covariance matrices is considered. Assuming that the incomplete data are of monotone pattern, a quantity similar to the Likelihood Ratio Test Statistic is proposed. A satisfactory approximation to the distribution of the quantity is derived. Hypothesis testing based on the approximate distribution is outlined. The merits of the test are investigated using Monte Carlo simulation. Monte Carlo studies indicate that the test is very satisfactory even for moderately small samples. The proposed methods are illustrated using an example.
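As a complete-data point of reference (the paper's monotone-missing-data adjustment is not reproduced here), the likelihood ratio quantity for testing equality of two covariance matrices can be sketched as:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n1, n2, p = 60, 80, 3
X1 = rng.multivariate_normal(np.zeros(p), np.eye(p), n1)
X2 = rng.multivariate_normal(np.zeros(p), np.eye(p), n2)

S1 = np.cov(X1, rowvar=False, bias=True)      # per-sample covariance MLEs
S2 = np.cov(X2, rowvar=False, bias=True)
Sp = (n1 * S1 + n2 * S2) / (n1 + n2)          # pooled MLE under H0: Sigma1 = Sigma2

# -2 log(likelihood ratio); asymptotically chi^2 with p(p+1)/2 df
lrt = ((n1 + n2) * np.log(np.linalg.det(Sp))
       - n1 * np.log(np.linalg.det(S1))
       - n2 * np.log(np.linalg.det(S2)))
df = p * (p + 1) // 2
p_value = stats.chi2.sf(lrt, df)
```

Concavity of log-determinant guarantees the statistic is nonnegative; the chi-square calibration is the usual large-sample one, not the refined approximation the paper derives.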

3.
The F test is compared with three procedures based on ranks for testing treatment effects in the randomized complete block, fixed-effects model with one observation per cell.

4.
A full likelihood method is proposed to analyse continuous longitudinal data with non-ignorable (informative) missing values and non-monotone patterns. The problem arose in a breast cancer clinical trial where repeated assessments of quality of life were collected: patients rated their coping ability during and after treatment. We allow the missingness probabilities to depend on unobserved responses, and we use a multivariate normal model for the outcomes. A first-order Markov dependence structure for the responses is a natural choice and facilitates the construction of the likelihood; estimates are obtained via the Nelder–Mead simplex algorithm. Computations are difficult and become intractable with more than three or four assessments. Applying the method to the quality-of-life data results in easily interpretable estimates, confirms the suspicion that the data are non-ignorably missing and highlights the likely bias of standard methods. Although treatment comparisons are not affected here, the methods are useful for obtaining unbiased means and estimating trends over time.
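A stripped-down sketch of the estimation step: a bivariate normal likelihood (two assessments, no missingness, so the selection-model part of the authors' likelihood is omitted) maximized with the Nelder–Mead simplex via SciPy. The tanh/exp links are illustrative reparameterizations, not from the paper:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
true_mu, true_rho = np.array([5.0, 6.0]), 0.6
Sigma = np.array([[1.0, true_rho], [true_rho, 1.0]])
y = rng.multivariate_normal(true_mu, Sigma, size=200)   # two assessments per subject

def neg_loglik(theta):
    mu1, mu2, log_s, z = theta
    s = np.exp(log_s)              # variance kept positive
    rho = np.tanh(z)               # serial correlation kept inside (-1, 1)
    C = s * np.array([[1.0, rho], [rho, 1.0]])
    return -multivariate_normal([mu1, mu2], C).logpdf(y).sum()

start = [y[:, 0].mean(), y[:, 1].mean(), 0.0, 0.0]
fit = minimize(neg_loglik, start, method="Nelder-Mead",
               options={"maxiter": 2000})
mu_hat, rho_hat = fit.x[:2], np.tanh(fit.x[3])
```

Even this toy version shows why the full problem scales badly: each extra assessment adds parameters and a higher-dimensional integral once informative missingness enters the likelihood.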

5.
Searle and Rudan (1973) derive the inverse of a covariance matrix for unbalanced data in ANOVA. Their expression is highly complicated. This paper presents an alternative derivation and shows how unbalancedness enters into the expression.

6.
Three modified tests for homogeneity of the odds ratio for a series of 2 × 2 tables are studied when the data are clustered. In the case of clustered data, the standard tests for homogeneity of odds ratios ignore the variance inflation caused by positive correlation among responses of subjects within the same cluster, and therefore have inflated Type I error. The modified tests adjust for the variance inflation in the three existing standard tests: Breslow–Day, Tarone and the conditional score test. The degree of clustering effect is measured by the intracluster correlation coefficient, ρ. A variance correction factor derived from ρ is then applied to the variance estimator in the standard tests of homogeneity of the odds ratio. The proposed tests are an application of the variance adjustment method commonly used in correlated data analysis and are shown to maintain the nominal significance level in a simulation study. Copyright © 2004 John Wiley & Sons, Ltd.

7.
This paper introduces a nonparametric approach for testing the equality of two or more survival distributions based on right censored failure times with missing population marks for the censored observations. The standard log-rank test is not applicable here because the population membership information is not available for the right censored individuals. We propose to use the imputed population marks for the censored observations, leading to fractional at-risk sets that can be used in a two sample censored data log-rank test. We demonstrate with a simple example that there could be a gain in power by imputing population marks (the proposed method) for the right censored individuals compared to simply removing them (which would also maintain the correct size). Performance of the imputed log-rank tests obtained this way is studied through simulation. We also obtain an asymptotic linear representation of our test statistic. Our testing methodology is illustrated using a real data set.
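For orientation, here is a minimal NumPy implementation of the standard two-sample log-rank statistic that the paper generalizes (complete population marks and, for brevity, no censoring; the fractional at-risk machinery is not reproduced):

```python
import numpy as np

def logrank(time, event, group):
    """Standard two-sample log-rank chi-square statistic."""
    time, event, group = map(np.asarray, (time, event, group))
    obs_minus_exp, var = 0.0, 0.0
    for t in np.unique(time[event]):               # distinct event times
        at_risk = time >= t
        n = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()        # at risk in group 1
        d = ((time == t) & event).sum()            # events at time t
        d1 = ((time == t) & event & (group == 1)).sum()
        obs_minus_exp += d1 - d * n1 / n
        if n > 1:                                  # hypergeometric variance
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return obs_minus_exp ** 2 / var                # ~ chi^2(1) under H0

rng = np.random.default_rng(7)
t1 = rng.exponential(1.0, 100)                     # group 1: higher hazard
t2 = rng.exponential(2.0, 100)                     # group 0: lower hazard
time = np.concatenate([t1, t2])
event = np.ones(200, dtype=bool)                   # all failures observed
group = np.concatenate([np.ones(100, int), np.zeros(100, int)])
chi2_stat = logrank(time, event, group)
```

The paper's contribution is, in effect, to replace the hard 0/1 group-membership indicators above with imputed fractional memberships for censored observations.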

8.
In this paper, Yates's missing plot technique is used to derive the formula for substituting a missing plot in a general incomplete block design, where block effects are assumed to be independent normal. The use of penalized normal equations, based on BLUPs, makes this task simpler.

9.
The randomized block design is routinely employed in the social and biopharmaceutical sciences. With no missing values, analysis of variance (AOV) can be used to analyze such experiments. However, if some data are missing, the AOV formulae are no longer applicable, and iterative methods such as restricted maximum likelihood (REML) are recommended, assuming block effects are treated as random. Despite the well-known advantages of REML, methods like AOV based on complete cases (blocks) only (CC-AOV) continue to be used by researchers, particularly in situations where routinely only a few missing values are encountered. Reasons for this appear to include a natural proclivity for non-iterative, summary-statistic-based methods, and a presumption that CC-AOV is only trivially less efficient than REML with only a few missing values (say, ≤ 10%). The purpose of this note is two-fold. First, to caution that CC-AOV can be considerably less powerful than REML even with only a few missing values. Second, to offer a summary-statistic-based, pairwise-available-case-estimation (PACE) alternative to CC-AOV. PACE, which is identical to AOV (and REML) with no missing values, outperforms CC-AOV in terms of statistical power. However, it is recommended in lieu of REML only if software to implement the latter is unavailable, or the use of a "transparent" formula-based approach is deemed necessary. An example using real data is provided for illustration.
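A small simulation sketches the CC-AOV issue: one missing cell forces an entire block out of the complete-case analysis. The RCBD F statistic below is textbook AOV, not the PACE estimator:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
b, k = 12, 4
block_eff = rng.normal(0, 2, size=(b, 1))
treat_eff = np.array([0.0, 0.5, 1.0, 1.5])        # real treatment differences
y = block_eff + treat_eff + rng.normal(0, 1, size=(b, k))

y_miss = y.copy()
y_miss[0, 1] = np.nan                             # a single missing cell (~2%)

complete = ~np.isnan(y_miss).any(axis=1)          # CC-AOV keeps complete blocks only
kept = y_miss[complete]

def rcbd_f(data):
    """Treatment F statistic and p-value for a complete RCBD."""
    nb, nk = data.shape
    gm = data.mean()
    ss_tr = nb * ((data.mean(axis=0) - gm) ** 2).sum()
    ss_bl = nk * ((data.mean(axis=1) - gm) ** 2).sum()
    ss_err = ((data - gm) ** 2).sum() - ss_tr - ss_bl
    f = (ss_tr / (nk - 1)) / (ss_err / ((nb - 1) * (nk - 1)))
    return f, stats.f.sf(f, nk - 1, (nb - 1) * (nk - 1))

f_full, p_full = rcbd_f(y)                        # all 12 blocks
f_cc, p_cc = rcbd_f(kept)                         # 11 complete blocks only
```

One missing value costs the complete-case analysis three perfectly good observations in that block, which is the source of the power loss the note cautions about.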

10.
In this article, we study the global L2 error of a nonlinear wavelet estimator of the density in the Besov space B^s_{p,q} for a missing-data model when covariables are present, and prove that the estimator can achieve the optimal rate of convergence, similar to the result of Donoho et al. (1996, Ann. Statist. 24:508–539) in the complete, independent-data case, using term-by-term thresholding of the empirical wavelet coefficients. Finite-sample behavior of the proposed estimator is explored via simulations.
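Term-by-term thresholding is easy to illustrate with a hand-rolled Haar transform (the paper's Besov-space density estimator for missing data with covariables is far more general); the universal threshold below assumes the noise level is known:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 256
x = np.linspace(0, 1, n)
signal = np.sin(4 * np.pi * x)
noisy = signal + rng.normal(0, 0.3, n)

def haar_fwd(v):
    """Full Haar decomposition; returns (approximation, details coarse->fine)."""
    coeffs = []
    while len(v) > 1:
        s = (v[0::2] + v[1::2]) / np.sqrt(2)
        d = (v[0::2] - v[1::2]) / np.sqrt(2)
        coeffs.append(d)
        v = s
    return v, coeffs[::-1]

def haar_inv(approx, coeffs):
    v = approx
    for d in coeffs:
        out = np.empty(2 * len(d))
        out[0::2] = (v + d) / np.sqrt(2)
        out[1::2] = (v - d) / np.sqrt(2)
        v = out
    return v

approx, details = haar_fwd(noisy)
thr = 0.3 * np.sqrt(2 * np.log(n))       # universal threshold, sigma = 0.3 known
details = [np.sign(d) * np.maximum(np.abs(d) - thr, 0) for d in details]
denoised = haar_inv(approx, details)
```

Soft-thresholding each empirical coefficient individually is exactly the "term-by-term" rule the abstract refers to, here in its simplest orthonormal-wavelet form.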

11.
Competing risks occur when subjects may fail from one of several mutually exclusive causes. For example, when a patient with cancer may also die from another cause, we are interested in the effect of a covariate on the probability of dying of cancer by a certain time. Several approaches have been suggested for analysing competing risks data when the cause of failure is completely observed. In this paper, we consider missing causes of failure together with interval-censored failure times, a setting for which no existing method is available. We apply the pseudo-value approach of Klein and Andersen [Klein JP, Andersen PK. Regression modeling of competing risks data based on pseudovalues of the cumulative incidence function. Biometrics. 2005;61:223–229] based on the estimated cumulative incidence function, and regression coefficients are estimated through multiple imputation. We evaluate the suggested method by comparing it with a complete-case analysis in several simulation settings.

12.
For capture–recapture models when covariates are subject to measurement errors and missing data, a set of estimating equations is constructed to estimate population size and relevant parameters. These estimating equations can be solved by an algorithm similar to the EM algorithm. The proposed method is also applicable to the situation when covariates with no measurement errors have missing data. Simulation studies are used to assess the performance of the proposed estimator. The estimator is also applied to a capture–recapture experiment on the bird species Prinia flaviventris in Hong Kong. The Canadian Journal of Statistics 37: 645–658; 2009 © 2009 Statistical Society of Canada
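As a much simpler relative of the paper's estimating-equation approach (no covariates, no measurement error), the two-occasion Chapman/Lincoln–Petersen estimator of population size can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(11)
N_true = 500
p1, p2 = 0.3, 0.25                     # capture probabilities on two occasions
caught1 = rng.random(N_true) < p1
caught2 = rng.random(N_true) < p2

n1 = caught1.sum()                     # marked on occasion 1
n2 = caught2.sum()                     # caught on occasion 2
m = (caught1 & caught2).sum()          # recaptures

# Chapman's bias-corrected Lincoln-Petersen estimator of population size
N_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
```

The paper's estimating equations extend this basic logic to heterogeneous capture probabilities driven by error-prone, partially missing covariates.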

13.
Non-response (or missing data) is often encountered in large-scale surveys. To enable behavioural analysis of these data sets, statistical treatments are commonly applied to complete or remove the missing data. However, the correctness of such procedures critically depends on the nature of the underlying missingness-generation process: the efficacy of applying either case deletion or imputation rests on the unknown missingness mechanism. The contribution of this paper is twofold. First, the study proposes a simple sequential method to attempt to identify the form of missingness. Second, the effectiveness of the tests is assessed on nine data sets generated experimentally by removing data under imposed MCAR, MAR and NMAR processes.
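The three missingness mechanisms are easy to impose on synthetic data; the logistic response models below are illustrative choices, not the paper's experimental design:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 1000
x = rng.normal(size=n)                # fully observed covariate
y = 2 * x + rng.normal(size=n)        # variable that will lose values

def logistic(t):
    return 1 / (1 + np.exp(-t))

mcar = rng.random(n) < 0.3                   # MCAR: independent of everything
mar = rng.random(n) < logistic(x - 0.5)      # MAR: depends only on observed x
nmar = rng.random(n) < logistic(y - 1.0)     # NMAR: depends on unobserved y itself

y_mcar = np.where(mcar, np.nan, y)
y_mar = np.where(mar, np.nan, y)
y_nmar = np.where(nmar, np.nan, y)

# Under NMAR the observed-case mean is biased; under MCAR it is not
mean_mcar = np.nanmean(y_mcar)
mean_nmar = np.nanmean(y_nmar)
```

The visible bias of the NMAR observed-case mean is precisely what makes identifying the mechanism matter before choosing deletion or imputation.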

14.
Abstract

A method is proposed for the estimation of missing data in analysis of covariance models. This is based on obtaining an estimate of the missing observation that minimizes the error sum of squares. Specific derivation of this estimate is carried out for the one-factor analysis of covariance, and numerical examples are given to show the nature of the estimates produced. Parameter estimates of the imputed data are then compared with those of the incomplete data.
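The core idea, choosing the fill-in value that minimizes the error sum of squares, can be sketched numerically for a one-factor ANCOVA; the minimizer coincides with the fitted value from the regression that excludes the missing observation:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
g = np.repeat([0, 1, 2], 10)                 # one factor, three groups
z = rng.normal(size=30)                      # covariate
y = 1.0 + 0.5 * g + 2.0 * z + rng.normal(0, 0.4, 30)

# ANCOVA design matrix: intercept, group dummies, covariate
X = np.column_stack([np.ones(30), g == 1, g == 2, z]).astype(float)

def sse_if_filled(m, miss=7):
    """Residual SS of the ANCOVA fit when y[miss] is set to m."""
    y_fill = y.copy()
    y_fill[miss] = m
    beta = np.linalg.lstsq(X, y_fill, rcond=None)[0]
    r = y_fill - X @ beta
    return r @ r

res = minimize_scalar(sse_if_filled)         # SSE is quadratic in m
y_hat_missing = res.x                        # estimate of the missing value
```

This reproduces the classical Yates-style result: minimizing the error sum of squares over the missing cell is equivalent to imputing its fitted value from the remaining data.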

15.
The variance of short-term systematic measurement errors for the difference of paired data is estimated. The difference of paired data is obtained by subtracting the measurement results of two methods, each of which measures the same item only once, without measurement repetition. The unbiased estimators of the short-term systematic measurement error variances based on the one-way random effects model are not suitable for practical use because they can be negative. The estimators derived here, for balanced as well as unbalanced data, are always positive but biased; they are likewise based on the one-way random effects model. The biases, variances, and mean squared errors of the positive estimators are derived, along with their estimators. The positive estimators are suitable for practical use.

16.
Summary: The objective of this analysis of variance of paired data is to estimate positive random error variances for each of N = 2 measurement methods. The two methods measure the same item only once, without measurement repetition. The well-known unbiased Grubbs' estimators are not suitable for practical use because they can become negative. With the help of Chebyshev's inequality, the probability that Grubbs' estimators become negative is determined. Based on the Grubbs' estimators, new estimators are derived. The new estimators are indeed always positive, but they are biased. It is shown that the biases are small. In case the Grubbs' estimators are positive, a bias correction of the new estimators may be envisaged.
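A quick numerical illustration of the underlying problem: Grubbs' estimators subtract the between-method covariance from each method's variance and can therefore go negative in small samples. The always-positive clip below is a naive stand-in, not the estimators derived in the paper:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 25
truth = rng.normal(10, 2, n)                 # true item values
x = truth + rng.normal(0, 0.5, n)            # method 1, error sd 0.5
y = truth + rng.normal(0, 1.5, n)            # method 2, error sd 1.5

s_xx = np.var(x, ddof=1)
s_yy = np.var(y, ddof=1)
s_xy = np.cov(x, y, ddof=1)[0, 1]            # estimates the shared item variance

# Grubbs' unbiased estimators of the two error variances
g_x = s_xx - s_xy                            # can be negative in small samples
g_y = s_yy - s_xy

# Crude always-positive alternative (NOT the paper's construction)
pos_x = max(g_x, 1e-8)
pos_y = max(g_y, 1e-8)
```

Clipping at a small positive value restores positivity at the cost of bias, which is the trade-off the paper quantifies for its properly derived positive estimators.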

17.
For J ≥ 2 independent groups, the article deals with testing the global hypothesis that all J groups have a common population median or identical quantiles, with an emphasis on the quartiles. Classic rank-based methods are sometimes suggested for comparing medians, but it is well known that under general conditions they do not adequately address this goal. Extant methods based on the usual sample median are unsatisfactory when there are tied values, except for the special case J = 2. A variation of the percentile bootstrap, used in conjunction with the Harrell–Davis quantile estimator, performs well in simulations. The method is illustrated with data from the Well Elderly 2 study.
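A sketch of the J = 2 case: the Harrell–Davis estimator is a beta-weighted average of the order statistics, and a percentile bootstrap on the difference of the two estimated medians gives the test:

```python
import numpy as np
from scipy import stats

def harrell_davis(x, q=0.5):
    """Harrell-Davis quantile estimator: beta-weighted order statistics."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    a, b = (n + 1) * q, (n + 1) * (1 - q)
    edges = np.arange(n + 1) / n
    w = stats.beta.cdf(edges[1:], a, b) - stats.beta.cdf(edges[:-1], a, b)
    return w @ x

rng = np.random.default_rng(9)
x1 = rng.normal(0.0, 1, 80)
x2 = rng.normal(0.8, 1, 80)

# Percentile bootstrap CI for the difference of HD medians
boot = np.array([
    harrell_davis(rng.choice(x1, 80)) - harrell_davis(rng.choice(x2, 80))
    for _ in range(2000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
reject = not (lo <= 0.0 <= hi)               # H0: equal population medians
```

Because every order statistic receives positive weight, the Harrell–Davis estimator varies smoothly in the presence of tied values, which is what lets the bootstrap behave well where the ordinary sample median does not.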

18.
Abstract

This paper is concerned with the problem of estimating the mean of the selected population from two normal populations with unknown means and common known variance in a Bayesian framework. The empirical Bayes estimator, when additional observations are available, is derived, and its bias and risk functions are computed. The expected bias and risk of the empirical Bayes estimator and of the intuitive estimator are compared. It is shown that the empirical Bayes estimator is asymptotically optimal and, in particular, dominates the intuitive estimator in terms of Bayes risk with respect to any normal prior. The Bayesian correlations between the mean of the selected population (a random parameter) and several estimators of interest are also obtained and compared.

19.
20.
This work presents a new method to deal with missing values in financial time series. Previous work is generally based on state-space models and the Kalman filter, and few studies consider ARCH-family models. The traditional approach is to splice the observed data together and perform the estimation without accounting for the missing values. Existing methods generally place the missing values in the returns; the proposed method instead allows for missing values in the prices of the assets. The performance of the method in estimating the parameters and the volatilities is evaluated through a Monte Carlo simulation, which also considers value at risk. An empirical application to the NASDAQ 100 Index series is presented.
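A toy version of the setting: simulate ARCH(1) returns, delete some prices, and repair the price path. The linear interpolation of log prices is only a naive illustration, not the estimation method proposed in the paper:

```python
import numpy as np

rng = np.random.default_rng(6)
T = 500
omega, alpha = 0.05, 0.3                   # ARCH(1): sigma_t^2 = omega + alpha * r_{t-1}^2
r = np.zeros(T)
for t in range(1, T):
    sigma2 = omega + alpha * r[t - 1] ** 2
    r[t] = np.sqrt(sigma2) * rng.standard_normal()

log_p = 4.0 + np.cumsum(r)                 # log prices implied by the returns
p = np.exp(log_p)

# Drop 5% of the *prices* (the paper's setting), not of the returns
miss = rng.random(T) < 0.05
p_obs = p.copy()
p_obs[miss] = np.nan

# Naive repair for illustration: linear interpolation of observed log prices
idx = np.arange(T)
log_filled = np.interp(idx, idx[~miss], np.log(p_obs[~miss]))
r_filled = np.diff(log_filled)             # returns recomputed from filled prices
```

Note that a single missing price corrupts two consecutive returns, which is why treating missingness at the price level rather than the return level changes the problem.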
