Similar Articles
20 similar articles found (search time: 46 ms)
1.
The multivariate nonparametric tests analogous to the univariate rank sum test and median test are contained in Puri and Sen (1970). These tests provide a practical alternative for the analysis of multivariate data when the assumptions of parametric methods are not satisfied.

In this paper, maximum values of L_N, the asymptotic chi-square test statistic for both the Multivariate Multisample Rank Sum Test (MMRST) and the Multivariate Multisample Median Test (MMMT), are developed.
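As a minimal illustration (a sketch under an assumed data layout and variable names, not the authors' derivation of the maximum values), the Puri-Sen type L_N rank sum statistic referred to its asymptotic chi-square distribution can be computed as follows:

    import numpy as np
    from scipy import stats

    def mmrst_statistic(samples):
        """Puri-Sen type multisample multivariate rank sum statistic.

        samples: list of (n_i x p) arrays, one per group.
        Returns the chi-square type statistic and its asymptotic df p*(k-1).
        """
        X = np.vstack(samples)                            # pooled N x p data
        N, p = X.shape
        k = len(samples)
        R = np.apply_along_axis(stats.rankdata, 0, X)     # component-wise ranks
        grand_mean = R.mean(axis=0)                       # equals (N + 1) / 2 per variate
        V = np.cov(R, rowvar=False, bias=True)            # pooled rank covariance (divisor N)
        Vinv = np.linalg.pinv(V)
        stat, start = 0.0, 0
        for S in samples:
            n_i = len(S)
            Rbar_i = R[start:start + n_i].mean(axis=0)
            d = Rbar_i - grand_mean
            stat += n_i * d @ Vinv @ d
            start += n_i
        return stat, p * (k - 1)

    # toy usage: three samples of bivariate data
    rng = np.random.default_rng(0)
    groups = [rng.normal(size=(15, 2)),
              rng.normal(0.5, 1.0, size=(12, 2)),
              rng.normal(size=(10, 2))]
    L, df = mmrst_statistic(groups)
    print(L, df, stats.chi2.sf(L, df))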

2.
In ophthalmologic or otolaryngologic studies, each subject may contribute measurements on paired organs to the analysis. A number of statistical methods have been proposed for such bilateral correlated data. In practice, it is important to detect confounder-by-treatment interaction, since ignoring confounding effects may lead to unreliable conclusions. Therefore, stratified data analysis can be considered to adjust for the effect of a confounder on statistical inference. In this article, we investigate and derive three test procedures for testing the homogeneity of the difference of two proportions for stratified correlated paired binary data on the basis of an equal correlation model assumption. The performance of the proposed test procedures is examined through Monte Carlo simulation. The simulation results show that the score test usually controls the type I error well while retaining high power, and it is therefore recommended among the three methods. An example from an otolaryngologic study is given to illustrate the three test procedures.

3.
Small sample tables are not available for the multisample multivariate rank sum test (MMRST) or the multisample multivariate median test (MMMT) L_N statistic. Consequently, the statistic is usually compared to its asymptotic chi-square critical value. To investigate the appropriateness of this procedure, a Monte Carlo study is used to measure both the significance level and the relative power for a variety of multivariate dispersion structures.

4.
Using simulation techniques, the null distribution properties of seven hypothesis testing procedures and a comparison of their powers are investigated for incomplete-data small-sample growth curve situations. The testing procedures are combinations of two growth curve models (the Potthoff and Roy model for complete data and Kleinbaum's extension to incomplete data) and three estimation techniques (two involving means of existing observations and the other using the EM algorithm), plus an analysis of a subset of complete data. All seven tests use the Kleinbaum Wald statistic, but different tests use different information. The hypotheses of identical and parallel growth curves are tested under the assumptions of multivariate normality and a linear polynomial mean growth curve for each of two groups. Good approximate null distributions are found for all procedures, and one procedure is identified as empirically most powerful for the situations investigated.

5.
Real-time polymerase chain reaction (PCR) is a reliable quantitative technique in gene expression studies. The statistical analysis of real-time PCR data is crucial for the analysis and interpretation of results. The statistical procedures for analyzing real-time PCR data determine the slope of the regression line and calculate the reaction efficiency. Mathematical functions have been used to calculate target gene expression relative to the reference gene(s). Moreover, these statistical techniques compare Ct (threshold cycle) numbers between control and treatment groups. There are many different procedures in SAS for real-time PCR data evaluation. In this study, the efficiency-calibrated model and the delta-delta Ct model have been statistically tested and described. Several methods were used to compare control and treatment means of Ct, including the t-test (parametric), the Wilcoxon test (non-parametric) and multiple regression. Results showed that the applied methods led to similar conclusions, and no significant difference was observed between the gene expression measurements obtained by the relative methods.
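A minimal Python sketch of the delta-delta Ct calculation and the group comparisons mentioned above (the study uses SAS; the Ct values and the assumption of roughly 100% reaction efficiency here are hypothetical):

    import numpy as np
    from scipy import stats

    # Hypothetical Ct values; column order: target gene, reference gene
    control   = np.array([[24.1, 18.2], [24.5, 18.0], [23.9, 18.3], [24.3, 18.1]])
    treatment = np.array([[22.0, 18.1], [22.4, 18.2], [21.8, 18.0], [22.2, 18.3]])

    # delta Ct = Ct(target) - Ct(reference) for each sample
    d_control   = control[:, 0] - control[:, 1]
    d_treatment = treatment[:, 0] - treatment[:, 1]

    # delta-delta Ct and relative expression (fold change), assuming ~100% efficiency
    ddct = d_treatment.mean() - d_control.mean()
    fold_change = 2.0 ** (-ddct)

    # Compare delta Ct between groups: parametric t-test and non-parametric rank-sum test
    t_stat, t_p = stats.ttest_ind(d_treatment, d_control)
    w_stat, w_p = stats.ranksums(d_treatment, d_control)
    print(fold_change, t_p, w_p)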

6.
Multivariate failure time data arise when each study subject can potentially experience several types of failures or recurrences of a certain phenomenon, or when failure times are sampled in clusters. We formulate the marginal distributions of such multivariate data with semiparametric accelerated failure time models (i.e. linear regression models for log-transformed failure times with arbitrary error distributions) while leaving the dependence structures for related failure times completely unspecified. We develop rank-based monotone estimating functions for the regression parameters of these marginal models based on right-censored observations. The estimating equations can be easily solved via linear programming. The resultant estimators are consistent and asymptotically normal. The limiting covariance matrices can be readily estimated by a novel resampling approach, which does not involve non-parametric density estimation or evaluation of numerical derivatives. The proposed estimators represent consistent roots of the potentially non-monotone estimating equations based on weighted log-rank statistics. Simulation studies show that the new inference procedures perform well in small samples. Illustrations with real medical data are provided.

7.
Multivariate panel count data often occur when there exist several related recurrent events or response variables defined by occurrences of related events. For univariate panel count data, several nonparametric treatment comparison procedures have been developed. However, no such nonparametric procedure seems to exist for the multivariate case. Based on differences between estimated mean functions, this article proposes a class of nonparametric test procedures for multivariate panel count data. The asymptotic distribution of the new test statistics is established and a simulation study is conducted. Moreover, the new procedures are applied to a skin cancer problem that motivated this study.

8.
Interval-censored data are very common in reliability and lifetime data analysis. This paper investigates the performance of different estimation procedures for a special type of interval-censored data, i.e. grouped data, from three widely used lifetime distributions. The approaches considered here include maximum likelihood estimation, minimum distance estimation based on the chi-square criterion, moment estimation based on the imputation (IM) method and an ad hoc estimation procedure. Although IM-based techniques have been used extensively in recent years, we show that this method is not always effective. It is found that the ad hoc estimation procedure is equivalent to minimum distance estimation with another distance metric and is more effective in the simulations. The procedures of the different approaches are presented and their performances are investigated by Monte Carlo simulation for various combinations of sample sizes and parameter settings. The numerical results provide practitioners with guidelines for choosing a good estimation approach when analysing grouped data.
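A hedged sketch of two of the estimation approaches, grouped-data maximum likelihood and minimum distance estimation with the chi-square criterion, for an assumed exponential lifetime distribution with made-up bin counts (the paper's three distributions and its ad hoc procedure are not reproduced here):

    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import expon

    # Hypothetical grouped (interval-censored) data: bin edges and observed counts
    edges  = np.array([0.0, 1.0, 2.0, 4.0, 8.0, np.inf])
    counts = np.array([32, 24, 25, 14, 5])
    n = counts.sum()

    def cell_probs(scale):
        # probability of falling in each interval under an exponential model
        return np.diff(expon.cdf(edges, scale=scale))

    # Maximum likelihood for grouped data: maximize sum of counts * log(cell probability)
    def neg_loglik(scale):
        p = np.clip(cell_probs(scale), 1e-12, 1.0)
        return -(counts * np.log(p)).sum()

    # Minimum distance estimation based on the chi-square criterion
    def chisq_distance(scale):
        p = np.clip(cell_probs(scale), 1e-12, 1.0)
        expected = n * p
        return ((counts - expected) ** 2 / expected).sum()

    mle = minimize_scalar(neg_loglik, bounds=(0.1, 20), method="bounded").x
    mde = minimize_scalar(chisq_distance, bounds=(0.1, 20), method="bounded").x
    print(mle, mde)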

9.
An imputation procedure is a procedure by which each missing value in a data set is replaced (imputed) by an observed value using a predetermined resampling procedure. The distribution of a statistic computed from a data set consisting of observed and imputed values, called a completed data set, is affected by the imputation procedure used. In a Monte Carlo experiment, three imputation procedures are compared with respect to the empirical behavior of the goodness-of-fit chi-square statistic computed from a completed data set. The results show that each imputation procedure affects the distribution of the goodness-of-fit chi-square statistic in a different manner. However, when the empirical behavior of the goodness-of-fit chi-square statistic is compared to its appropriate asymptotic distribution, there are no substantial differences between these imputation procedures.
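A minimal sketch of one possible imputation procedure of this kind, resampling each missing value from the observed values and then computing the goodness-of-fit chi-square statistic on the completed data set (the category labels, proportions and missingness pattern are hypothetical, not those of the Monte Carlo experiment):

    import numpy as np
    from scipy.stats import chisquare

    rng = np.random.default_rng(42)

    # Hypothetical categorical sample with missing values coded as None
    categories = np.array(["A", "B", "C"])
    data = rng.choice(categories, size=200, p=[0.5, 0.3, 0.2]).astype(object)
    data[rng.choice(200, size=30, replace=False)] = None   # 30 missing values

    observed = np.array([v for v in data if v is not None])

    # One simple imputation procedure: resample each missing value from the observed values
    imputed = data.copy()
    missing_idx = [i for i, v in enumerate(data) if v is None]
    imputed[missing_idx] = rng.choice(observed, size=len(missing_idx), replace=True)

    # Goodness-of-fit chi-square on the completed data set against hypothesized proportions
    completed_counts = np.array([(imputed == c).sum() for c in categories])
    expected = len(imputed) * np.array([0.5, 0.3, 0.2])
    stat, p_value = chisquare(completed_counts, f_exp=expected)
    print(stat, p_value)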

10.
In this paper, three analysis procedures for repeated correlated binary data with no a priori ordering of the measurements are described and subsequently investigated. Examples of correlated binary data are the binary assessments of subjects obtained by several raters in the framework of a clinical trial. This topic is especially relevant when success criteria have to be defined for dedicated imaging trials involving several raters conducted for regulatory purposes. First, an analytical result on the expectation of the 'Majority rater' is presented when only the marginal distributions of the single raters are given. The paper provides a simulation study in which all three analysis procedures are compared for a particular setting. It turns out that in many cases 'Average rater' is associated with a gain in power. Settings were identified where 'Majority significant' has favorable properties. 'Majority rater' is in many cases difficult to interpret.
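A toy sketch, not the paper's three procedures or its analytical result, of how 'Average rater' and 'Majority rater' summaries might be formed from correlated binary ratings generated through an assumed shared latent subject effect:

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(7)
    n_subjects, n_raters, p_success, rho = 200, 3, 0.7, 0.5

    # Hypothetical correlated binary ratings via a shared latent subject effect
    subject = rng.normal(size=(n_subjects, 1))
    noise = rng.normal(size=(n_subjects, n_raters))
    latent = np.sqrt(rho) * subject + np.sqrt(1 - rho) * noise   # standard normal marginals
    ratings = (latent < norm.ppf(p_success)).astype(int)         # P(rating = 1) is about p_success

    # 'Average rater': average of the individual raters' assessments
    average_rater = ratings.mean(axis=1).mean()

    # 'Majority rater': a subject is a success if the majority of raters call it a success
    majority_rater = (ratings.sum(axis=1) > n_raters / 2).mean()

    print(average_rater, majority_rater)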

11.
Wald and Wolfowitz (1948) have shown that the Sequential Probability Ratio Test (SPRT) for deciding between two simple hypotheses is, under very restrictive conditions, optimal in three attractive senses. First, it can be a Bayes-optimal rule. Second, of all level α tests having the same power, the test with the smallest joint-expected number of observations is the SPRT, where this expectation is taken jointly with respect to both the data and the prior over the two hypotheses. Third, the level α test needing the fewest conditional-expected number of observations is the SPRT, where this expectation is now taken with respect to the data conditional on either hypothesis being true. Principal among the strong restrictions is that sampling can proceed only in a one-at-a-time manner. In this paper, we relax some of the conditions and show that there are sequential procedures that strictly dominate the SPRT in all three senses. We conclude that the third type of optimality occurs rarely and that decision-makers are better served by looking for sequential procedures that possess the first two types of optimality. By relaxing the one-at-a-time sampling restriction, we obtain optimal (in the first two senses) variable-sample-size sequential probability ratio tests.
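A minimal sketch of a one-at-a-time SPRT for a normal mean with known variance, using Wald's approximate boundaries; the hypotheses and parameter values are assumptions for illustration, and the paper's variable-sample-size procedures are not shown:

    import numpy as np
    from scipy.stats import norm

    def sprt_normal_mean(stream, mu0, mu1, sigma=1.0, alpha=0.05, beta=0.05):
        """One-at-a-time SPRT for H0: mu = mu0 vs H1: mu = mu1 (known sigma).

        Wald's approximate boundaries: A ~ (1 - beta) / alpha, B ~ beta / (1 - alpha).
        Returns the decision and the number of observations used.
        """
        log_A = np.log((1 - beta) / alpha)
        log_B = np.log(beta / (1 - alpha))
        llr, n = 0.0, 0
        for n, x in enumerate(stream, start=1):
            llr += norm.logpdf(x, mu1, sigma) - norm.logpdf(x, mu0, sigma)
            if llr >= log_A:
                return "accept H1", n
            if llr <= log_B:
                return "accept H0", n
        return "no decision", n

    rng = np.random.default_rng(3)
    decision, n_used = sprt_normal_mean(rng.normal(0.5, 1.0, size=1000), mu0=0.0, mu1=0.5)
    print(decision, n_used)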

12.
The six recommendations made by the Guidelines for Assessment and Instruction in Statistics Education (GAISE) committee were first communicated in 2005 and more formally in 2010. In this article, 25 introductory statistics textbooks are examined to assess how well these textbooks have incorporated the three GAISE recommendations most relevant to implementation in textbooks (statistical literacy and thinking; use of real data; stressing concepts over procedures). The implementation of another recommendation (using technology) is described but not assessed. In general, most textbooks appear to be adopting the GAISE recommendations reasonably well in both exposition and exercises. The textbooks are particularly adept at using real data, using it well, and promoting statistical literacy. Textbooks are less adept, though still rated reasonably well in general, at explaining concepts over procedures and promoting statistical thinking. In contrast, few textbooks have easy-to-use glossaries of statistical terms to assist with understanding statistical language and developing literacy. Supplementary materials for this article are available online.

13.
In this paper, we perform the analysis of the SUR Tobit model for three left-censored dependent variables by modeling its nonlinear dependence structure through the one-parameter Clayton copula. For unbiased parameter estimation, we propose an extension of the Inference Function for Augmented Margins (IFAM) method to the trivariate case. Interval estimation for the model parameters using resampling procedures is also discussed. We perform simulation and empirical studies, whose satisfactory results indicate the good performance of the proposed model and methods. Our procedure is illustrated using real data on the consumption of food items (salad dressings, lettuce, tomato) by Americans.
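A minimal sketch of simulating from the one-parameter Clayton copula that models the dependence structure, via the Marshall-Olkin gamma-frailty construction; this is not the IFAM estimation procedure itself, and the sample size and theta value are arbitrary:

    import numpy as np

    def clayton_copula_sample(n, theta, dim=3, seed=0):
        """Draw n observations from a dim-variate Clayton copula with parameter theta > 0.

        Marshall-Olkin construction: U_j = (1 + E_j / V) ** (-1 / theta),
        with V ~ Gamma(1/theta, 1) and E_j iid standard exponential.
        """
        rng = np.random.default_rng(seed)
        v = rng.gamma(shape=1.0 / theta, scale=1.0, size=(n, 1))
        e = rng.exponential(size=(n, dim))
        return (1.0 + e / v) ** (-1.0 / theta)

    # Uniform margins with lower-tail dependence controlled by theta;
    # Kendall's tau for the Clayton copula is theta / (theta + 2) = 0.5 here
    u = clayton_copula_sample(n=5000, theta=2.0)
    print(u.mean(axis=0), np.corrcoef(u, rowvar=False)[0, 1])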

14.
The analysis of incomplete contingency tables is a practical and interesting problem. In this paper, we provide characterizations of the various missing mechanisms of a variable in terms of response and non-response odds for two- and three-dimensional incomplete tables. Log-linear parametrization and some distinctive properties of the missing data models for these tables are discussed. All possible cases in which data on one, two or all variables may be missing are considered. We study the missingness of each variable in a model, which is more insightful for analyzing cross-classified data than the missingness of the outcome vector. For sensitivity analysis of incomplete tables, we propose easily verifiable procedures to evaluate the missing at random (MAR), missing completely at random (MCAR) and not missing at random (NMAR) assumptions of the missing data models. These methods depend only on joint and marginal odds computed from fully and partially observed counts in the tables, respectively. Finally, some real-life datasets are analyzed to illustrate our results, which are confirmed by simulation studies.

15.
Principal component analysis has become a fundamental tool of functional data analysis. It represents the functional data as X_i(t) = μ(t) + Σ_{l≥1} η_{i,l} v_l(t), where μ is the common mean, the v_l are the eigenfunctions of the covariance operator and the η_{i,l} are the scores. Inferential procedures assume that the mean function μ(t) is the same for all values of i. If, in fact, the observations do not come from one population, but rather their mean changes at some point(s), the results of principal component analysis are confounded by the change(s). It is therefore important to develop a methodology to test the assumption of a common functional mean. We develop such a test using quantities which can be readily computed in the R package fda. The null distribution of the test statistic is asymptotically pivotal with a well-known asymptotic distribution. The asymptotic test has excellent finite sample performance. Its application is illustrated on temperature data from England.
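A minimal grid-based sketch of the decomposition X_i(t) = μ(t) + Σ η_{i,l} v_l(t) via eigen-decomposition of the sample covariance matrix, with simulated curves; this is not the change-in-mean test itself, which the authors implement with the R package fda:

    import numpy as np

    rng = np.random.default_rng(1)
    n_curves, n_grid = 100, 50
    t = np.linspace(0, 1, n_grid)

    # Hypothetical functional sample: common mean plus two smooth random components plus noise
    mu = np.sin(2 * np.pi * t)
    scores = rng.normal(size=(n_curves, 2)) * np.array([1.0, 0.5])
    basis = np.vstack([np.sqrt(2) * np.cos(2 * np.pi * t),
                       np.sqrt(2) * np.sin(4 * np.pi * t)])
    X = mu + scores @ basis + 0.1 * rng.normal(size=(n_curves, n_grid))

    # Functional PCA on the grid: eigen-decomposition of the sample covariance
    mu_hat = X.mean(axis=0)
    C = np.cov(X - mu_hat, rowvar=False)          # n_grid x n_grid covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)
    order = np.argsort(eigvals)[::-1]
    v_hat = eigvecs[:, order[:2]].T               # estimated eigenfunctions v_l (up to sign/grid scaling)
    eta_hat = (X - mu_hat) @ v_hat.T              # estimated scores eta_{i,l}
    print(eigvals[order[:3]])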

16.
Test and estimation procedures for detecting a change in the mean are proposed for infinite moving average long memory time series models. The asymptotic properties of the test statistics and the change-point estimators are investigated. The method is illustrated through the analysis of real data sets from econometrics and climatology.

17.
Multivariate analysis is difficult when there are missing observations in the response vectors. Kleinbaum (1973) proposed a Wald statistic useful in the analysis of incomplete multivariate data. SUBROUTINE COEF calculates the estimated parameter matrix ξ in the generalization of the Potthoff-Roy (1964) growth curve model proposed by Kleinbaum (1973). SUBROUTINE WALD calculates the Wald statistic for hypotheses of the form H0: HξD = 0 as proposed by Kleinbaum (1973).
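The subroutines themselves are not reproduced here; as a generic sketch, a Wald statistic for a linear hypothesis of the form H0: HξD = 0 can be computed as follows, assuming consistent estimates of ξ and of the covariance of vec(ξ) are already available (the incomplete-data estimation handled by SUBROUTINE COEF is not shown):

    import numpy as np
    from scipy import stats

    def wald_statistic(xi_hat, cov_xi, H, D):
        """Wald statistic for the linear hypothesis H0: H xi D = 0.

        xi_hat : q x m estimated parameter matrix
        cov_xi : (q*m) x (q*m) estimated covariance of vec(xi_hat), column-wise vec
        H, D   : known matrices defining the hypothesis.
        """
        L = np.kron(D.T, H)                      # vec(H xi D) = (D' kron H) vec(xi)
        est = L @ xi_hat.flatten(order="F")      # column-wise vectorization
        V = L @ cov_xi @ L.T
        W = est @ np.linalg.solve(V, est)
        df = L.shape[0]
        return W, df, stats.chi2.sf(W, df)

    # toy usage with a 2 x 3 parameter matrix and a contrast between its two rows
    rng = np.random.default_rng(0)
    xi_hat = rng.normal(size=(2, 3))
    cov_xi = np.eye(6) * 0.05
    H = np.array([[1.0, -1.0]])
    D = np.eye(3)
    print(wald_statistic(xi_hat, cov_xi, H, D))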

18.
A set of three goodness-of-fit procedures is proposed to investigate the adequacy of fit of Fisher's distribution on the sphere as a model for a given sample of spherical data. The procedures are all based on standard tests using the empirical distribution function.

19.
The technique of surrogate data analysis may be employed to test the hypothesis that an observed data set was generated by one of several specific classes of dynamical system. Current algorithms for surrogate data analysis enable one, in a generic way, to test for membership of the following three classes of dynamical system: (0) independent and identically distributed noise, (1) linearly filtered noise, and (2) a monotonic nonlinear transformation of linearly filtered noise. We show that one may apply statistics from nonlinear dynamical systems theory, in particular those derived from the correlation integral, as test statistics for the hypothesis that an observed time series is consistent with each of these three linear classes of dynamical system. Using statistics based on the correlation integral, we show that it is also possible to test much broader (and not necessarily linear) hypotheses. We illustrate these methods with radial basis models and an algorithm to estimate the correlation dimension. By exploiting some special properties of this correlation dimension estimation algorithm we are able to test very specific hypotheses. Using these techniques we demonstrate that the respiratory control of human infants exhibits a quasi-periodic orbit (the obvious inspiratory/expiratory cycle) together with cyclic amplitude modulation. This cyclic amplitude modulation manifests as a stable focus in the first return map (equivalently, the sequence of successive peaks).
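A minimal sketch of the correlation integral C(r) on which such test statistics are based, using a delay embedding with an assumed dimension and a toy time series (the radial basis modelling and the specific surrogate tests are not reproduced):

    import numpy as np

    def correlation_integral(x, radii, dim=3, lag=1):
        """Grassberger-Procaccia correlation integral C(r) for a scalar time series.

        Delay-embeds the series in `dim` dimensions, then returns the fraction of
        point pairs whose distance falls below each radius r.
        """
        n = len(x) - (dim - 1) * lag
        emb = np.column_stack([x[i * lag: i * lag + n] for i in range(dim)])
        dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
        pair_d = dists[np.triu_indices(n, k=1)]
        return np.array([(pair_d < r).mean() for r in radii])

    # toy series; a correlation dimension estimate is the slope of log C(r) vs log r
    rng = np.random.default_rng(0)
    x = np.sin(np.linspace(0, 30, 1000)) + 0.05 * rng.normal(size=1000)
    radii = np.logspace(-2, 0, 10)
    C = correlation_integral(x, radii)
    slope = np.polyfit(np.log(radii), np.log(C + 1e-12), 1)[0]
    print(slope)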

20.
Consider k independent random samples such that the ith sample is drawn from a two-parameter exponential population with location parameter μi and scale parameter θi, i = 1, …, k. For simultaneously testing differences between the location parameters of successive exponential populations, closed testing procedures are proposed separately for the following cases: (i) when the scale parameters are unknown and equal, and (ii) when the scale parameters are unknown and unequal. The critical constants required for the proposed procedures are obtained numerically, and selected values of the critical constants are tabulated. A simulation study revealed that the proposed procedures have a better ability to detect significant differences and have more power than existing procedures. The proposed procedures are illustrated using real data.

