Similar Articles
20 similar articles found.
1.
ABSTRACT

Background: Many exposures in epidemiological studies have nonlinear effects, and the problem is to choose an appropriate functional relationship between such exposures and the outcome. One common approach is to investigate several parametric transformations of the covariate of interest and to select a posteriori the function that fits the data best. However, such an approach may result in an inflated Type I error. Methods: Through a simulation study, we generated data from Cox models with different transformations of a single continuous covariate. We investigated the Type I error rate and the power of the likelihood ratio test (LRT) corresponding to three different procedures that considered the same set of parametric dose-response functions. The first, unconditional, approach did not involve any model selection, while the second, conditional, approach was based on a posteriori selection of the parametric function. The proposed third approach was similar to the second except that it used a corrected critical value for the LRT to ensure a correct Type I error. Results: The Type I error rate of the second approach was two times higher than the nominal size. For simple monotone dose-responses, the corrected test had similar power to the unconditional approach, while for non-monotone dose-responses it had higher power. A real-life application focused on the effect of body mass index on the risk of coronary heart disease death illustrated the advantage of the proposed approach. Conclusion: Our results confirm that selecting the functional form of the dose-response a posteriori inflates the Type I error. The corrected procedure, which can be applied in a wide range of situations, may provide a good trade-off between Type I error and power.
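The inflation from a posteriori selection, and the simulation-based correction, can be illustrated with a small sketch. The paper works with Cox models; as a simplifying assumption, this sketch substitutes an ordinary linear model and a hypothetical set of four candidate transformations, simulating under the null to obtain a corrected critical value for the maximal LRT:

```python
import numpy as np

rng = np.random.default_rng(0)

def lrt_best_transform(x, y):
    """Max likelihood-ratio statistic over a set of candidate transforms.
    Linear-model stand-in for the Cox model of the paper (assumption)."""
    n = len(y)
    rss0 = np.sum((y - y.mean()) ** 2)          # null model: intercept only
    stats = []
    for g in (lambda t: t, np.log, np.sqrt, lambda t: t ** 2):
        z = g(x)
        X = np.column_stack([np.ones(n), z])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss1 = np.sum((y - X @ beta) ** 2)
        stats.append(n * np.log(rss0 / rss1))   # LRT for adding transform g(x)
    return max(stats)

# Null simulations: y is independent of x, so any "significant" fit is a false positive.
null_max = [lrt_best_transform(rng.uniform(0.5, 2.0, 200), rng.normal(size=200))
            for _ in range(2000)]
corrected_cv = np.quantile(null_max, 0.95)      # corrected critical value
naive_cv = 3.841                                # chi-square(1) 95% quantile
print(corrected_cv, naive_cv)
```

Because the maximum over several candidate fits stochastically dominates any single chi-square(1) statistic, the corrected critical value exceeds the naive one, which is exactly the inflation the paper corrects for.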

2.
ABSTRACT

Recent literature has proposed a test for exponentiality based on sample entropy. We consider transformations of the observations which turn the test of exponentiality into one of uniformity and use a corresponding test based on entropy. The test based on the transformed variables performs better in many cases of interest.
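The exponential-to-uniform route can be sketched as follows: under exponentiality, the estimated-CDF transform u = 1 − exp(−x/x̄) is approximately Uniform(0, 1), and a spacing-based entropy estimate (Vasicek's, used here as one standard choice, not necessarily the paper's exact statistic) should be near 0, the maximal entropy on [0, 1]:

```python
import numpy as np

def vasicek_entropy(x, m=None):
    """Vasicek spacing-based entropy estimate of a sample."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    if m is None:
        m = max(1, int(round(np.sqrt(n) / 2)))  # common window-size choice
    lo = np.clip(np.arange(n) - m, 0, n - 1)
    hi = np.clip(np.arange(n) + m, 0, n - 1)
    return np.mean(np.log(n * (x[hi] - x[lo]) / (2 * m)))

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=500)   # data under H0: exponential
u = 1.0 - np.exp(-x / x.mean())            # estimated-CDF transform to (0, 1)
h = vasicek_entropy(u)
# Uniform(0, 1) maximizes entropy (H = 0) among laws on [0, 1];
# values far below 0 would indicate departure from uniformity, hence from exponentiality.
print(h)
```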

3.

Evolutionary algorithms are heuristic stochastic search and optimization techniques with principles taken from natural genetics. They are procedures mimicking the evolution process of an initial population through genetic transformations. This paper is concerned with the problem of finding A-optimal incomplete block designs for multiple treatment comparisons represented by a matrix of contrasts. An evolutionary algorithm for searching for optimal, or nearly optimal, incomplete block designs is described in detail. Various examples regarding the application of the algorithm to some well-known problems illustrate its good performance.
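A minimal sketch of the idea, not the paper's algorithm: evolve block designs by mutation and truncation selection, scoring each design by the A-criterion (trace of the Moore-Penrose inverse of the treatment information matrix C = R − N N′/k). The sizes v, b, k below are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
v, b, k = 5, 5, 3          # treatments, blocks, plots per block (toy sizes, assumption)

def a_score(design):
    """A-criterion: sum of reciprocal non-zero eigenvalues of the C-matrix
    (the trace of its Moore-Penrose inverse); lower is better."""
    N = np.zeros((v, b))
    for j in range(b):
        for t in design[j]:
            N[t, j] += 1
    C = np.diag(N.sum(axis=1)) - N @ N.T / k
    eig = np.linalg.eigvalsh(C)
    pos = eig[eig > 1e-8]
    if len(pos) < v - 1:
        return np.inf      # disconnected design: not all contrasts estimable
    return float(np.sum(1.0 / pos))

def mutate(design):
    """Genetic operator: reassign the treatment on one randomly chosen plot."""
    d = design.copy()
    d[rng.integers(b), rng.integers(k)] = rng.integers(v)
    return d

# Evolve a population by mutation and truncation selection.
pop = [rng.integers(v, size=(b, k)) for _ in range(30)]
for _ in range(200):
    pop = sorted(pop + [mutate(p) for p in pop], key=a_score)[:30]
best = pop[0]
print(a_score(best))
```

Real implementations also use crossover and more careful encodings; this sketch only shows the score-mutate-select loop.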

4.

In this article we examine the effect that logarithmic and power transformations have on the order of integration of raw time series. For this purpose, we use a version of the tests of Robinson (1994) that permits us to test I(d) statistical models. The results, obtained via Monte Carlo, show that there is no effect on the degree of dependence of the series when these types of transformations are employed, making them useful mechanisms to apply when a more plausible economic interpretation of the data is required.

5.
ABSTRACT

Alternatives for positively skewed and heteroscedastic data include the Yuen-Welch (YW) test, data transformations, and the generalized linear model (GzLM). Because the GzLM is rarely considered in psychology compared with the other two, we compared these strategies conceptually and empirically. The YW test generally has satisfactory power, but its trimmed mean can deviate substantially from the arithmetic mean, which is often the desired parameter. The gamma GzLM can be used as a substitute for the log transformation and addresses the limitations in inference of the YW test and of data transformations.
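One conceptual point behind the comparison can be checked numerically: a log-transform-then-average analysis targets E[log Y] (the log geometric mean), whereas a gamma GzLM with log link models log E[Y] (the arithmetic mean the abstract refers to). For skewed data the two differ by Jensen's inequality. A sketch with hypothetical gamma data:

```python
import numpy as np

rng = np.random.default_rng(3)
# Skewed positive outcome: gamma with mean 10 (shape 2, scale 5) -- illustrative choice
y = rng.gamma(shape=2.0, scale=5.0, size=100_000)

# Log-transform-then-average estimates E[log Y], the log *geometric* mean,
# while a gamma GzLM with log link models log E[Y], the arithmetic mean.
log_geo = np.mean(np.log(y))
log_arith = np.log(np.mean(y))
print(log_geo, log_arith)   # geometric mean < arithmetic mean by Jensen's inequality
```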

6.
Abstract

This paper is concerned with independence testing in high dimensions. A new test statistic is proposed with two terms: one is based on the modified distance correlation statistic, and the other is constructed to enhance power under sparse alternatives. Asymptotic properties of the test statistic are discussed under some regularity conditions. Finite-sample simulations exhibit its superiority over some existing procedures. Finally, a real-data example illustrates the proposed test.
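The distance correlation ingredient (here the plain Székely-Rizzo sample version, not the paper's modified statistic) detects dependence beyond linear correlation. A sketch: a quadratic relationship is uncorrelated with its input but has a clearly positive distance correlation.

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation of two 1-D samples (Székely et al. V-statistic form)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])           # pairwise distance matrices
    b = np.abs(y[:, None] - y[None, :])
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()   # double centring
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()
    return np.sqrt(dcov2 / np.sqrt((A * A).mean() * (B * B).mean()))

rng = np.random.default_rng(4)
x = rng.normal(size=500)
y_dep = x ** 2 + 0.1 * rng.normal(size=500)   # dependent but (nearly) uncorrelated with x
y_ind = rng.normal(size=500)
print(distance_correlation(x, y_dep), distance_correlation(x, y_ind))
```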

7.
Abstract

Fourier methods are proposed for testing the distribution of random effects in classical and robust multivariate mixed effects models. The test statistics involve estimation of the characteristic function of random effects. Theoretical and computational issues are addressed while Monte Carlo results show that the new procedures compare favorably with other methods.

8.

A test for exponentiality based on progressively Type-II right censored spacings has been proposed recently by Balakrishnan et al. (2002). They derived the asymptotic null distribution of the test statistic. In this work, we utilize the algorithm of Huffer and Lin (2001) to evaluate the exact null probabilities and the exact critical values of this test statistic.

9.
Abstract

In this study, we discuss multiple comparison procedures for identifying normal means that are not the maximum among several normal means. Specifically, we propose a single-step procedure, a sequentially rejective step-down procedure, and a step-up procedure. For the single-step procedure we determine the critical value for a specified significance level; for the step-down and step-up procedures we determine the critical value at each step of the test. For all three procedures we formulate the power of the test under a specified alternative hypothesis, and we give numerical examples of critical values and power to compare the three procedures.
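The paper derives critical values tailored to its "not the maximum" hypotheses; as a generic illustration of the sequentially rejective step-down mechanics only (an assumption, not the paper's procedure), Holm's method compares ordered p-values against increasingly lenient thresholds and stops at the first non-rejection:

```python
import numpy as np

def holm_reject(pvals, alpha=0.05):
    """Sequentially rejective step-down procedure (Holm):
    compare ordered p-values with alpha/m, alpha/(m-1), ...; stop at first failure."""
    pvals = np.asarray(pvals, dtype=float)
    order = np.argsort(pvals)
    m = len(pvals)
    reject = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(order):
        if pvals[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break          # once one hypothesis survives, all later ones survive too
    return reject

p = np.array([0.001, 0.012, 0.03, 0.2])   # hypothetical p-values
print(holm_reject(p))
```

Here 0.001 ≤ 0.05/4 and 0.012 ≤ 0.05/3 are rejected, but 0.03 > 0.05/2 stops the procedure.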

10.
Abstract

A wide class of Yule distributions is introduced here as a generalization of the extended Yule distribution of Martinez-Rodríguez et al. and the generalized Yule distribution of Mishra, and some of its properties are investigated. Various methods of estimation are employed for estimating the parameters of the distribution, and generalized likelihood ratio test procedures are suggested for testing the significance of the parameters. All these procedures are illustrated with real data sets.

11.
Abstract

This paper investigates parameter-change tests for a class of observation-driven models for count time series. We propose two cumulative sum (CUSUM) test procedures for the detection of changes in model parameters. Under regularity conditions, the asymptotic null distributions of the test statistics are established. In addition, integer-valued generalized autoregressive conditional heteroskedastic (INGARCH) processes with conditional negative binomial distributions are investigated. The developed techniques are examined through simulation studies and illustrated using an empirical example.

12.
ABSTRACT

Transformation of the response is a popular method for meeting the usual assumptions of statistical methods based on linear models, such as ANOVA and the t-test. In this paper, we introduce new families of transformations for proportion or percentage data. Most transformations for proportions require 0 < x < 1 (where x denotes the proportion), which is often not the case in real data. The proposed families of transformations allow x = 0 and x = 1. We study the properties of the proposed transformations, as well as their performance in achieving normality and homoscedasticity. We analyze three real data sets to show empirically how the new transformations perform in meeting the usual assumptions. A simulation study is also performed to study the behavior of the new families of transformations.
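The boundary problem the new families address can be seen with classical transformations: the logit is infinite at x = 0 and x = 1, while the arcsine square root (shown here only as a standard comparator, not the paper's proposal) is finite on the closed interval:

```python
import numpy as np

def arcsine_sqrt(p):
    """Classical variance-stabilising transform for proportions;
    finite at the boundary values p = 0 and p = 1."""
    return np.arcsin(np.sqrt(np.asarray(p, dtype=float)))

p = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
with np.errstate(divide="ignore"):
    logit = np.log(p / (1 - p))      # infinite at both endpoints
print(arcsine_sqrt(p))               # maps [0, 1] onto [0, pi/2]
print(logit)
```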

13.
ABSTRACT

Canonical correlations are maximized correlation coefficients indicating the relationships between pairs of canonical variates that are linear combinations of the two sets of original variables. The number of non-zero canonical correlations in a population is called its dimensionality. Parallel analysis (PA) is an empirical method for determining the number of principal components or factors that should be retained in factor analysis. An example illustrates how procedures based on PA and a bootstrap-modified PA can be adapted to the context of canonical correlation analysis (CCA). The performance of the proposed procedures is evaluated in a simulation study by comparison with traditional sequential test procedures with respect to under-, correct-, and over-determination of dimensionality in CCA.
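A permutation-flavoured sketch of PA adapted to CCA (an illustrative assumption, not the paper's exact procedure): compute the observed canonical correlations, then repeatedly shuffle the rows of one data set to break the association, and retain only dimensions whose observed correlation beats the 95th percentile of the shuffled ones:

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations via SVD of the product of orthonormal bases."""
    qx, _ = np.linalg.qr(X - X.mean(0))
    qy, _ = np.linalg.qr(Y - Y.mean(0))
    return np.linalg.svd(qx.T @ qy, compute_uv=False)

rng = np.random.default_rng(5)
n = 300
z = rng.normal(size=n)                                  # one shared latent dimension
X = np.column_stack([z + rng.normal(size=n), rng.normal(size=n)])
Y = np.column_stack([z + rng.normal(size=n), rng.normal(size=n)])

obs = canonical_correlations(X, Y)
# Parallel-analysis-style reference distribution: row-permute Y to destroy association.
perm = np.array([canonical_correlations(X, Y[rng.permutation(n)])
                 for _ in range(200)])
thresh = np.quantile(perm, 0.95, axis=0)
retained = int(np.sum(obs > thresh))
print(retained)            # number of canonical dimensions retained
```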

14.
Statistics (2012), 46(6), 1306-1328
ABSTRACT

In this paper, we consider testing the homogeneity of risk differences in independent binomial distributions, especially when data are sparse. Through theoretical and numerical studies, we point out drawbacks of existing tests in either controlling the nominal size or attaining power. The proposed test is designed to avoid these drawbacks. We present the asymptotic null distribution and asymptotic power function of the proposed test, and we provide numerical studies, including simulations and real data examples, showing that the proposed test gives reliable results compared with existing testing procedures.

15.
Abstract

To reduce test cost, assurance tests and their equivalent truncated sequential tests are studied. In a commonly used case, the operating characteristic (OC) function and expected test time (ETT) function of an assurance test are derived in a concise way. The equivalent test and related concepts are defined, and procedures to construct a nearly equivalent truncated sequential test from an assurance test are established. Computational studies show that the nearly equivalent truncated sequential tests proposed in this paper have almost the same OC curves as the corresponding assurance tests, while reducing the ETT dramatically; in fact, the results show that they save around 50% of the ETT compared with the corresponding assurance tests.

16.
Abstract

In this paper we present several goodness-of-fit tests for the centralized Wishart process, a popular matrix-variate time series model used to capture the stochastic properties of realized covariance matrices. The new test procedures are based on the extended Bartlett decomposition derived from the properties of the Wishart distribution and allow one to obtain sets of independently and standard normally distributed random variables under the null hypothesis. Several tests for normality and independence are then applied to these variables in order to support or reject the underlying assumption of a centralized Wishart process. To investigate the influence of estimated parameters on the suggested testing procedures in the finite-sample case, a simulation study is conducted. Finally, the new test methods are applied to real data consisting of realized covariance matrices computed for the returns on six assets traded on the New York Stock Exchange.
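The Bartlett decomposition the tests build on can be checked directly: for W ~ Wishart_p(n, I), the sub-diagonal entries of its lower-triangular Cholesky factor are i.i.d. standard normal (and the squared diagonal entries are chi-squared). A minimal Monte Carlo check of one sub-diagonal entry:

```python
import numpy as np

rng = np.random.default_rng(7)
p, n, reps = 3, 20, 4000
t10 = np.empty(reps)
for r in range(reps):
    X = rng.normal(size=(n, p))
    W = X.T @ X                      # W ~ Wishart_p(n, I)
    T = np.linalg.cholesky(W)        # Bartlett: W = T T', T lower triangular
    t10[r] = T[1, 0]                 # sub-diagonal entry: N(0, 1) under the decomposition
print(t10.mean(), t10.std())         # should be close to 0 and 1
```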

17.
Abstract

The homogeneity hypothesis is investigated in a location family of distributions. A moment-based test is introduced based on data collected from a ranked set sampling scheme. The asymptotic distribution of the proposed test statistic is determined and the performance of the test is studied via simulation. Furthermore, for small sample sizes, a bootstrap procedure is used to assess the homogeneity of the data. An illustrative example is also presented to explain the proposed procedures.

18.
In this article, we consider the problem of testing for variance breaks in time series in the presence of a changing trend. In performing the test, we employ the cumulative sum of squares (CUSSQ) test introduced by Inclán and Tiao (1994, J. Amer. Statist. Assoc., 89, 913-923). It is shown that the CUSSQ test is not robust in the case of a broken trend and that its asymptotic distribution does not converge to the supremum of a standard Brownian bridge. As a remedy, a bootstrap approximation method is designed to alleviate the size distortions of the test statistic while preserving its high power. Via a bootstrap functional central limit theorem, the consistency of these bootstrap procedures is established under general assumptions. Simulation results are provided for illustration, and an empirical example of application to a set of high-frequency real data is given.
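The CUSSQ statistic itself is easy to sketch: with D_k = C_k/C_T − k/T, where C_k is the cumulative sum of squares, the Inclán-Tiao statistic is sqrt(T/2) · max|D_k|, compared with the supremum of a Brownian bridge (5% critical value about 1.358). A sketch on hypothetical series with and without a variance break (trend complications, the paper's focus, are omitted here):

```python
import numpy as np

def cussq(e):
    """Inclán-Tiao centred cumulative sum of squares: D_k = C_k/C_T - k/T."""
    c = np.cumsum(np.asarray(e, float) ** 2)
    T = len(e)
    return c / c[-1] - np.arange(1, T + 1) / T

rng = np.random.default_rng(6)
T = 1000
stable = rng.normal(size=T)                          # no break
shifted = np.concatenate([rng.normal(size=T // 2),
                          3.0 * rng.normal(size=T // 2)])  # variance break at T/2
stat_stable = np.sqrt(T / 2) * np.max(np.abs(cussq(stable)))
stat_break = np.sqrt(T / 2) * np.max(np.abs(cussq(shifted)))
print(stat_stable, stat_break)   # compare with the 5% critical value 1.358
```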

19.
Unit Root and Cointegration Analysis of Panel Data (cited 10 times)
1. The meaning of panel data. Panel data (also called longitudinal data) describe a given sample from a population over a period of time; the data set is obtained by making multiple observations on each unit in the sample. These multiple observations include both observations of several characteristics of a unit at a given period (or point in time) and repeated observations of those characteristics over a span of time. In macroeconomics, panel data are widely applied to economic growth, industrial structure, technological innovation, finance, and tax policy; in microeconomics, they are heavily used in studies of employment, household consumption, school enrollment, and marketing. From 1990 to the present, nearly …

20.

Suppose that an order restriction is imposed among several p-variate normal mean vectors. We are interested in the problems of estimating these mean vectors and testing their homogeneity under this restriction. These problems are multivariate extensions of those of Bartholomew (1959). For the bivariate case, they have been studied by Sasabuchi et al. (1983, 1998), among others. In the present paper we examine the convergence of an iterative algorithm for computing the maximum likelihood estimator when p is larger than two, and we study some test procedures for testing homogeneity in this case.
