Similar Documents
20 similar documents found (search time: 15 ms)
1.
Identifying the risk factors for comorbidity is important in psychiatric research. Empirically, studies have shown that testing multiple, correlated traits simultaneously is more powerful than testing a single trait at a time in association analysis. Furthermore, for complex diseases, especially mental illnesses and behavioral disorders, the traits are often recorded in different scales such as dichotomous, ordinal and quantitative. In the absence of covariates, nonparametric association tests have been developed for multiple complex traits to study comorbidity. However, genetic studies generally contain measurements of some covariates that may affect the relationship between the risk factors of major interest (such as genes) and the outcomes. While it is relatively easy to adjust these covariates in a parametric model for quantitative traits, it is challenging for multiple complex traits with possibly different scales. In this article, we propose a nonparametric test for multiple complex traits that can adjust for covariate effects. The test aims to achieve an optimal scheme of adjustment by using a maximum statistic calculated from multiple adjusted test statistics. We derive the asymptotic null distribution of the maximum test statistic, and also propose a resampling approach, both of which can be used to assess the significance of our test. Simulations are conducted to compare the type I error and power of the nonparametric adjusted test to the unadjusted test and other existing adjusted tests. The empirical results suggest that our proposed test increases the power through adjustment for covariates when there exist environmental effects, and is more robust to model misspecifications than some existing parametric adjusted tests. We further demonstrate the advantage of our test by analyzing a data set on genetics of alcoholism.
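The max-of-adjusted-statistics idea with a resampling reference distribution can be sketched numerically. This is not the authors' exact procedure: the covariate adjustment here is a simple linear residualization, the per-trait statistics are scaled correlations, and all data and names (`g`, `x`, `traits`) are simulated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: n subjects, genotype g (0/1/2), one covariate x, and
# three correlated traits on different scales (all settings hypothetical).
n = 200
g = rng.integers(0, 3, n)
x = rng.normal(size=n)
base = 0.3 * g + 0.5 * x + rng.normal(size=n)
traits = np.column_stack([base,                        # quantitative
                          (base > 0).astype(float),    # dichotomous
                          np.digitize(base, [-1, 1])]) # ordinal (0/1/2)

def adjusted_stats(g, x, traits):
    """Per-trait association statistics after regressing out the covariate
    (a crude linear adjustment, standing in for the article's scheme)."""
    stats = []
    for y in traits.T:
        ry = y - np.polyval(np.polyfit(x, y, 1), x)   # residualize trait on x
        rg = g - np.polyval(np.polyfit(x, g, 1), x)   # residualize genotype on x
        r = np.corrcoef(ry, rg)[0, 1]
        stats.append(abs(r) * np.sqrt(len(y)))
    return np.array(stats)

# Maximum over the covariate-adjusted per-trait statistics.
t_obs = adjusted_stats(g, x, traits).max()

# Resampling reference distribution: permute genotype labels.  (A refined
# version would permute residualized genotypes; this is a simplification.)
perm = np.array([adjusted_stats(rng.permutation(g), x, traits).max()
                 for _ in range(500)])
p_value = (1 + np.sum(perm >= t_obs)) / (1 + len(perm))
```

Taking the maximum automatically adapts to whichever trait carries the signal, and the permutation step accounts for the correlation among the three statistics.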

2.
Abstract

In the area of goodness-of-fit there is a clear distinction between the problem of testing the fit of a continuous distribution and that of testing a discrete distribution. In all continuous problems the data are recorded with a limited number of decimals, so in theory one could say that the problem is always of a discrete nature, but it is common practice to ignore discretization and proceed as if the data were continuous. It is therefore an interesting question whether, in a given problem of test of fit, the “limited resolution” of the observed recorded values may or may not be of concern if the analysis ignores this implied discretization. In this article, we address the problem of testing the fit of a continuous distribution with data recorded with a limited resolution. A measure for the degree of discretization is proposed which involves the size of the rounding interval, the dispersion in the underlying distribution and the sample size. This measure is shown to be a key characteristic which allows comparison, across different problems, of the amount of discretization involved. Some asymptotic results are given for the distribution of the EDF (empirical distribution function) statistics that explicitly depend on the above-mentioned measure of the degree of discretization. The results obtained are illustrated with some simulations for testing normality when the parameters are known and also when they are unknown. The asymptotic distributions are shown to be an accurate approximation of the true finite-n distribution obtained by Monte Carlo. A real example from image analysis is also discussed. The conclusion drawn is that in cases where the value of the measure for the degree of discretization is not “large”, the practice of ignoring discreteness is of no concern.
However, when this value is “large”, the effect of ignoring discreteness leads to an inflated number of rejections of the tested distribution, compared with the number of rejections that would result if the rounding were taken into account. The error made in the number of rejections can be huge.
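The practical message is easy to reproduce with a small simulation: a Kolmogorov–Smirnov test of a fully specified normal holds its nominal size when the rounding interval is small relative to the dispersion and the sample size, and over-rejects badly when it is not. The "degree of discretization" shown in the comments is our illustrative stand-in, not the article's exact measure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def rejection_rate(n, h, reps=400, alpha=0.05):
    """Fraction of KS tests of N(0,1) (known parameters) that reject
    when N(0,1) data are recorded rounded to a grid of width h."""
    rejects = 0
    for _ in range(reps):
        z = rng.normal(size=n)
        zr = np.round(z / h) * h          # limited-resolution recording
        if stats.kstest(zr, "norm").pvalue < alpha:
            rejects += 1
    return rejects / reps

# A rough degree-of-discretization index in the spirit of the article:
# sqrt(n) * h / sigma (rounding interval h, dispersion sigma = 1 here).
n = 200
fine   = rejection_rate(n, h=0.01)   # index ≈ 0.14: rounding negligible
coarse = rejection_rate(n, h=1.0)    # index ≈ 14:  rounding severe
```

With `h = 0.01` the empirical rejection rate stays near the nominal 5%, while with `h = 1.0` the EDF's large atoms push the KS statistic past its critical value almost every time.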

3.
This article considers Bayesian inference, posterior and predictive, in the context of a start-up demonstration test procedure in which rejection of a unit occurs when a pre-specified number of failures is observed prior to obtaining the number of consecutive successes required for acceptance. The method developed for implementing Bayesian inference in this article is a Markov chain Monte Carlo (MCMC) method incorporating data augmentation. This method permits the analysis to go forth, even when the results of the start-up test procedure are not completely recorded or reported. An illustrative example is included.
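The sampling model underlying such a start-up demonstration test is simple to simulate: a unit is accepted on k consecutive successes and rejected once a pre-specified number of failures has accumulated. The sketch below only reproduces this test procedure (the article's MCMC-with-data-augmentation machinery is not shown), and all numerical settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def start_up_test(p, k, f_max, rng):
    """Run one start-up demonstration test: accept on k consecutive
    successes, reject once f_max failures have accumulated.
    Returns (accepted, n_trials, n_failures)."""
    run = failures = trials = 0
    while True:
        trials += 1
        if rng.random() < p:          # successful start-up
            run += 1
            if run == k:
                return True, trials, failures
        else:                          # failed start-up resets the run
            run = 0
            failures += 1
            if failures == f_max:
                return False, trials, failures

# Acceptance probability for a unit with success probability 0.9,
# requiring 5 consecutive successes before 3 failures (illustrative values).
results = [start_up_test(0.9, 5, 3, rng) for _ in range(2000)]
accept_rate = np.mean([r[0] for r in results])
```

For these settings the exact acceptance probability is 1 − (1 − 0.9⁵)³ ≈ 0.93, since each run between failures independently reaches five straight successes with probability 0.9⁵.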

4.
Recently, Perron has carried out tests of the unit-root hypothesis against the alternative hypothesis of trend stationarity with a break in the trend occurring at the Great Crash of 1929 or at the 1973 oil-price shock. His analysis covers the Nelson–Plosser macroeconomic data series as well as a postwar quarterly real gross national product (GNP) series. His tests reject the unit-root null hypothesis for most of the series. This article takes issue with the assumption used by Perron that the Great Crash and the oil-price shock can be treated as exogenous events. A variation of Perron's test is considered in which the breakpoint is estimated rather than fixed. We argue that this test is more appropriate than Perron's because it circumvents the problem of data-mining. The asymptotic distribution of the estimated breakpoint test statistic is determined. The data series considered by Perron are reanalyzed using this test statistic. The empirical results make use of the asymptotics developed for the test statistic as well as extensive finite-sample corrections obtained by simulation. The effect of fat-tailed and temporally dependent innovations on the empirical results is also investigated. In brief, by treating the breakpoint as endogenous, we find that there is less evidence against the unit-root hypothesis than Perron finds for many of the data series, but stronger evidence against it for several of the series, including the Nelson–Plosser industrial-production, nominal-GNP, and real-GNP series.

5.
The authors consider the linear model Yn = ψXn + εn relating a functional response with explanatory variables. They propose a simple test of the nullity of ψ based on the principal component decomposition. The limiting distribution of their test statistic is chi-squared, and this distribution is also an excellent approximation in finite samples. The authors illustrate their method using data from terrestrial magnetic observatories.

6.
We consider the distribution of the turning point location of time series modeled as the sum of a deterministic trend plus random noise. If the variables are modeled by shifted exponentials, whose location parameters define the trend, we provide a formula for computing the distribution of the turning point location, from which a confidence interval for the location can be estimated. We test this formula on simulated data series having a trend with an asymmetric minimum, investigating the coverage rate as a function of a bandwidth parameter. The method is applied to estimate the confidence interval of the minimum location for two real time series: the RT intervals extracted from the electrocardiogram recorded during an exercise test, and an economic indicator, the current account balance. We discuss the connection with stochastic ordering.
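A rough numerical counterpart: estimate the turning point as the argmin of a smoothed series and gauge its uncertainty. The article models the variables as shifted exponentials and derives the location's distribution analytically; the residual bootstrap below is a generic stand-in under Gaussian noise, and the trend shape and bandwidth are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)

# Series = deterministic trend with an asymmetric minimum, plus noise.
t = np.linspace(0, 10, 201)
trend = (t - 4.0) ** 2 * np.where(t < 4.0, 1.0, 0.4)   # minimum at t = 4
y = trend + rng.normal(scale=0.5, size=t.size)

def turning_point(y, t, bandwidth=5):
    """Estimate the minimum location after a moving-average smooth
    with the given half-width (in samples)."""
    kernel = np.ones(2 * bandwidth + 1) / (2 * bandwidth + 1)
    smooth = np.convolve(y, kernel, mode="same")
    inner = slice(bandwidth, len(y) - bandwidth)   # avoid edge effects
    return t[inner][np.argmin(smooth[inner])]

est = turning_point(y, t)

# Simple residual bootstrap for a confidence interval of the location.
resid = y - trend
boot = [turning_point(trend + rng.choice(resid, resid.size), t)
        for _ in range(300)]
ci = (np.percentile(boot, 2.5), np.percentile(boot, 97.5))
```

Because the trend is flatter to the right of the minimum, the bootstrap interval tends to be asymmetric around the estimate, which mirrors the coverage-versus-bandwidth question studied in the article.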

7.
Relative poverty lines are increasingly being used in poverty comparison studies. Existing methods assume that the distributions being compared are distinct with independent relative poverty lines. However, this practice may be problematic when comparing two subgroups of a population. We follow up on a recent proposal for the usage of common relative poverty lines in such cases, and develop a test for comparing poverty between subgroups of a single population, using inequality restrictions. Monte Carlo experiments are conducted in order to examine the size and power of our proposed test. We illustrate our procedure using some U.S. household income data.

8.
After reading a few articles in the nonlinear econometric literature one begins to notice that each discussion follows roughly the same lines as the classical treatment of maximum likelihood estimation. There are some technical problems, having to do with simultaneously conditioning on the exogenous variables and subjecting the true parameter to a Pitman drift, that prevent the use of the classical methods of proof, but the basic impression of similarity is correct. An estimator, be it nonlinear least squares, three-stage nonlinear least squares, or whatever, is the solution of an optimization problem; and the objective function of the optimization problem can be treated as if it were the likelihood to derive the Wald test statistic, the likelihood ratio test statistic, and Rao's efficient score statistic. Their asymptotic null and non-null distributions can be found using arguments fairly similar to the classical maximum likelihood arguments. In this article we exploit these observations and unify much of the nonlinear econometric literature. What escapes this unification is any method whose objective function is not twice continuously differentiable with respect to the parameters, minimum absolute deviations regression for example.

The model which generates the data need not be the same as the model which was presumed to define the optimization problem. Thus, these results can be used to obtain the asymptotic behavior of inference procedures under specification error. We think that this will prove to be the most useful feature of the paper. For example, it is not necessary to resort to Monte Carlo simulation to determine if a Translog estimate of an elasticity of substitution obtained by nonlinear three-stage least squares is robust against a CES true state of nature. The asymptotic approximations we give here will provide an analytic answer to the question, sufficiently accurate for most purposes.

9.
Summary.  To investigate the variability in energy output from a network of photovoltaic cells, solar radiation was recorded at 10 sites every 10 min in the Pentland Hills to the south of Edinburgh. We identify spatiotemporal auto-regressive moving average models as the most appropriate to address this problem. Although previously considered computationally prohibitive to work with, we show that by approximating using toroidal space and fitting by matching auto-correlations, calculations can be substantially reduced. We find that a first-order spatiotemporal auto-regressive (STAR(1)) process with a first-order neighbourhood structure and a Matérn noise process provides an adequate fit to the data, and we demonstrate its use in simulating realizations of energy output.

10.
Estimators of the intercept parameter of a simple linear regression model involve the slope estimator. In this article, we consider the estimation of the intercept parameters of two linear regression models with normal errors, when it is a priori suspected that the two regression lines are parallel, but in doubt. We also introduce a coefficient of distrust as a measure of the degree of distrust in the uncertain prior information regarding the equality of the two slopes. Three different estimators of the intercept parameters are defined by using the sample data, the non-sample uncertain prior information, an appropriate test statistic, and the coefficient of distrust. The relative performances of the unrestricted, shrinkage restricted and shrinkage preliminary test estimators are investigated based on analyses of the bias and risk functions under quadratic loss. If the prior information is precise and the coefficient of distrust is small, the shrinkage preliminary test estimator outperforms the other estimators. An example based on a medical study is used to illustrate the method.
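The preliminary-test idea can be sketched for two normal-error regressions: test slope equality and, if parallelism is not rejected, shrink the intercept estimator toward the restricted estimator built from a pooled slope, weighted by a coefficient of distrust d. This is a simplified normal-approximation version with simulated data, not the authors' exact estimator.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Two samples whose regression lines are truly parallel (slope 2).
n = 60
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y1 = 1.0 + 2.0 * x1 + rng.normal(size=n)   # intercept 1
y2 = 3.0 + 2.0 * x2 + rng.normal(size=n)   # intercept 3

def ols(x, y):
    """Return (intercept, slope, estimated variance of the slope)."""
    b, a = np.polyfit(x, y, 1)
    resid = y - (a + b * x)
    s2 = resid @ resid / (len(y) - 2)
    return a, b, s2 / ((x - x.mean()) ** 2).sum()

a1, b1, v1 = ols(x1, y1)
a2, b2, v2 = ols(x2, y2)

# Preliminary test of equal slopes (normal approximation, for illustration).
z = (b1 - b2) / np.sqrt(v1 + v2)
parallel = abs(z) < stats.norm.ppf(0.975)

# Restricted estimator: pooled slope, then the implied intercept.
b_pool = (b1 / v1 + b2 / v2) / (1 / v1 + 1 / v2)
a1_r = y1.mean() - b_pool * x1.mean()

# Shrinkage preliminary-test estimator with coefficient of distrust d:
# d = 0 means full trust in the prior information, d = 1 means ignore it.
d = 0.2
a1_pt = a1 if not parallel else d * a1 + (1 - d) * a1_r
```

When the prior information (equal slopes) is correct, the restricted intercept borrows strength from both samples, which is exactly the regime where the shrinkage preliminary-test estimator is reported to win.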

11.
This article presents a multiple hypothesis test procedure that combines two well known tests for structural change in the linear regression model, the CUSUM test and the recursive t test. The CUSUM test is run through the sequence of recursive residuals as usual; if the CUSUM plot does not violate the critical lines, one more step is taken to perform the t test for hypothesis of zero mean based on all recursive residuals. The asymptotic size of this multiple hypothesis test is derived; power simulation results suggest that it outperforms the traditional CUSUM test and complements other tests that are currently stressed in econometrics.
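A minimal sketch of the two-stage procedure: compute recursive residuals, run their CUSUM against the usual 5% critical lines, and, if CUSUM does not reject, apply a t test of zero mean to all recursive residuals. The constant 0.948 is the standard 5% CUSUM value; the combination below ignores the overall size correction that the article derives, and the stable-model data are simulated for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Stable linear model (no structural change): y = 1 + 0.5 x + noise.
n = 120
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = X @ np.array([1.0, 0.5]) + rng.normal(size=n)

def recursive_residuals(X, y, k):
    """Standardized one-step-ahead prediction errors w_t for t = k+1..n."""
    w = []
    for t in range(k, len(y)):
        Xt, yt = X[:t], y[:t]
        beta = np.linalg.lstsq(Xt, yt, rcond=None)[0]
        h = X[t] @ np.linalg.inv(Xt.T @ Xt) @ X[t]
        w.append((y[t] - X[t] @ beta) / np.sqrt(1.0 + h))
    return np.array(w)

k = X.shape[1]
w = recursive_residuals(X, y, k)
sigma = w.std(ddof=1)

# Stage 1: CUSUM of recursive residuals against the 5% critical lines.
cusum = np.cumsum(w) / sigma
r = np.arange(1, len(w) + 1)
bound = 0.948 * np.sqrt(len(w)) * (1 + 2 * r / len(w))
cusum_reject = np.any(np.abs(cusum) > bound)

# Stage 2 (reached only if CUSUM does not reject): t test of zero mean.
t_stat, p_val = stats.ttest_1samp(w, 0.0)
combined_reject = bool(cusum_reject or (p_val < 0.05))
```

The t-test stage targets exactly the alternative the CUSUM plot is weak against, a small but persistent drift in the mean of the recursive residuals.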

12.
It is crucial to test the goodness of fit of a model before it is used to make statistical inferences. However, no satisfactory goodness-of-fit test is available for categorical multilevel data, which occur when categorical data are clustered or hierarchical in nature. Hence the aim of this paper is to develop a new goodness-of-fit test for multilevel binary data, based on the tests of Hosmer and Lemeshow and of Lipsitz et al. In order to identify the properties of the developed test, simulation studies were carried out to assess the Type I error and the power.
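For reference, the single-level Hosmer–Lemeshow idea that the proposed multilevel test builds on can be sketched as follows: group observations by deciles of predicted risk and compare observed with expected counts. The variance-based denominator and the use of known rather than fitted probabilities are simplifications made here for a self-contained example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Simulated binary outcomes from a correctly specified logistic model.
n = 1000
x = rng.normal(size=n)
p = 1 / (1 + np.exp(-(0.5 + 1.0 * x)))
ybin = (rng.random(n) < p).astype(float)

def hosmer_lemeshow(p_hat, y, g=10):
    """Hosmer-Lemeshow-style statistic: group by deciles of risk and
    compare observed vs expected successes (single-level version; the
    article extends this idea to clustered/multilevel binary data)."""
    order = np.argsort(p_hat)
    chi2 = 0.0
    for idx in np.array_split(order, g):
        o, e = y[idx].sum(), p_hat[idx].sum()
        v = (p_hat[idx] * (1 - p_hat[idx])).sum()   # variance of O in group
        chi2 += (o - e) ** 2 / v
    return chi2, stats.chi2.sf(chi2, g - 2)        # classical g-2 df

chi2_stat, p_value = hosmer_lemeshow(p, ybin)
```

Under a correctly specified model the statistic stays near its reference chi-squared distribution; clustering inflates it, which is the problem the multilevel extension addresses.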

13.
When faced with the problem of goodness-of-fit to the Lognormal distribution, testing methods typically reduce to comparing the empirical distribution function of the corresponding logarithmic data to that of the normal distribution. In this article, we consider a family of test statistics which make use of the moment structure of the Lognormal law. In particular, a continuum of moment conditions is employed in the construction of a new statistic for this distribution. The proposed test is shown to be consistent against fixed alternatives, and a simulation study shows that it is more powerful than several classical procedures, including those utilizing the empirical distribution function. We conclude by applying the proposed method to some, not so typical, data sets.

15.
Abstract

Solar radiation is a global ecological phenomenon that affects life everywhere. In this study, a new statistical method, called the Quartiles-Moment's method, is proposed to estimate the scale and shape parameters of the exponentiated Gumbel maximum distribution (EGMD). The Kolmogorov–Smirnov test and the percentiles of the dataset are used to fit the dataset of the daily global solar radiation and the corresponding daily maximum temperature with the EGMD. Then, multiple nonlinear regressions relating the daily global solar radiation and the corresponding daily maximum temperature are produced and compared with the real dataset.

16.
In this article, we model functional magnetic resonance imaging (fMRI) data from event-related experiments using a fourth-degree spline to fit voxel-specific blood oxygenation level-dependent (BOLD) responses. The data are preprocessed to remove long-term temporal components such as drifts using wavelet approximations. Spatial dependence is incorporated by applying a 3D Gaussian spatial filter. The methodology assigns an activation score to each trial based on the voxel-specific characteristics of the response curve. The proposed procedure can be fully automated, and it produces activation images based on overall scores assigned to each voxel. The methodology is illustrated on real data from an event-related design experiment of visually guided saccades (VGS).

17.
In a breakthrough paper, Benjamini and Hochberg (J Roy Stat Soc Ser B 57:289–300, 1995) proposed a new error measure for multiple testing, the FDR, and developed a distribution-free procedure to control it under independence among the test statistics. In this paper we argue, by extensive simulation and theoretical considerations, that the assumption of independence is not needed. Along the lines of (Ann Stat 32:1035–1061, 2004b), we moreover provide a more powerful method that exploits an estimator of the number of false nulls among the tests. We propose a whole family of iterative estimators that prove robust under both dependence and independence between the test statistics. These estimators can also be used to improve classical multiple testing procedures and, in general, to estimate the weight of a known component in a mixture distribution. The innovations are illustrated by simulations.
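The Benjamini–Hochberg step-up procedure, and the gain from plugging in an estimate of the number of true nulls, can be sketched as follows. The estimator shown is a simple tail-count (Storey-type) plug-in standing in for the article's family of iterative estimators, and all mixture settings are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# 1000 one-sided tests: 200 false nulls (mean-3 normals), 800 true nulls.
m, m1 = 1000, 200
z = np.concatenate([rng.normal(3.0, 1.0, m1), rng.normal(0.0, 1.0, m - m1)])
pvals = stats.norm.sf(z)

def bh_threshold(p, q, m0=None):
    """Benjamini-Hochberg step-up at level q.  If an estimate m0 of the
    number of true nulls is supplied, it replaces m in the step-up
    bound (the 'adaptive' idea discussed in the article)."""
    m = len(p)
    m0 = m if m0 is None else m0
    ps = np.sort(p)
    ok = ps <= q * np.arange(1, m + 1) / m0
    return ps[ok].max() if ok.any() else 0.0

# Tail-count estimate of the number of true nulls (Storey-type plug-in).
lam = 0.5
m0_hat = min(m, (pvals > lam).sum() / (1 - lam))

thr_bh = bh_threshold(pvals, 0.05)
thr_adapt = bh_threshold(pvals, 0.05, m0=m0_hat)
n_bh = int((pvals <= thr_bh).sum())
n_adapt = int((pvals <= thr_adapt).sum())
```

Because `m0_hat` is below `m`, the adaptive bound is looser and the adaptive variant rejects at least as many hypotheses as plain BH at the same nominal FDR level.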

18.
This paper is concerned with Bayesian estimation and prediction in the context of start-up demonstration tests in which rejection of a unit is possible when a pre-specified number of failures is observed prior to obtaining the number of consecutive successes required for acceptance of the unit. A method for implementing Bayesian inference on the probability of success is developed for use when the test result of each start-up is not reported or even recorded, and only the number of trials until termination of the testing is available. Some errors in the related literature on the Bayesian analysis of start-up demonstration tests are corrected. The method developed in this paper is a Markov chain Monte Carlo (MCMC) method incorporating data augmentation, and it additionally enables Bayesian posterior inference on the number of failures given the number of start-up trials until termination to be made, along with Bayesian predictive inferences on the number of start-up trials and the number of failures until termination for any future run of the start-up demonstration test. An illustrative example is also included.

19.
In this paper, we propose and study a new global test, namely the GPF test, for the one-way ANOVA problem for functional data, obtained by globalizing the usual pointwise F-test. The asymptotic random expressions of the test statistic are derived, and its asymptotic power is investigated. The GPF test is shown to be root-n consistent. It is much less computationally intensive than a parametric bootstrap test proposed in the literature for the one-way ANOVA for functional data. Via some simulation studies, it is found that, in terms of size-controlling and power, the GPF test is comparable with two existing tests adopted for the one-way ANOVA problem for functional data. A real data example illustrates the GPF test.
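The globalizing idea is concrete: compute the pointwise one-way ANOVA F statistic at every grid point and integrate (here, average) it over the domain. The paper derives the asymptotic null distribution of this statistic; the permutation calibration below is an easy stand-in for a sketch, and the simulated curves and group sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)

# Three groups of functional observations on a common grid (equal means).
T, n_per = 50, 20
grid = np.linspace(0, 1, T)
groups = [rng.normal(size=(n_per, T)) + np.sin(2 * np.pi * grid)
          for _ in range(3)]

def pointwise_F(groups):
    """Classical one-way ANOVA F statistic at every grid point."""
    k = len(groups)
    n = sum(g.shape[0] for g in groups)
    grand = np.vstack(groups).mean(axis=0)
    ssb = sum(g.shape[0] * (g.mean(axis=0) - grand) ** 2 for g in groups)
    ssw = sum(((g - g.mean(axis=0)) ** 2).sum(axis=0) for g in groups)
    return (ssb / (k - 1)) / (ssw / (n - k))

# GPF-style global statistic: average the pointwise F over the grid.
F_obs = pointwise_F(groups).mean()

# Permutation reference distribution (shuffle curves across groups).
data = np.vstack(groups)
sizes = [g.shape[0] for g in groups]

def split(d):
    return np.split(d, np.cumsum(sizes)[:-1])

perm = np.array([pointwise_F(split(rng.permutation(data))).mean()
                 for _ in range(300)])
p_value = (1 + (perm >= F_obs).sum()) / (1 + len(perm))
```

Averaging rather than bootstrapping is what makes the GPF approach cheap: one pass over the grid per dataset instead of a full resampling loop per pointwise test.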

20.
Two methods for testing the equality of variances in straight-line regressions with a change point are considered. One is a likelihood ratio test and the other a Bayesian confidence interval based on the highest posterior density for the ratio of variances, using non-informative priors. Both methods are applied to the renal transplant data analyzed by Smith and Cook (1980) and Stephens (1994).
