Similar Documents
20 similar documents found.
1.
In modern statistical practice, it is increasingly common to observe a set of curves or images, often measured with noise, and to use these as the basis of analysis (functional data analysis). We consider a functional data model consisting of measurement error and functional random effects motivated by data from a study of human vision. By transforming the data into the wavelet domain we are able to exploit the expected sparse representation of the underlying function and the mechanism generating the random effects. We propose simple fitting procedures and illustrate the methods on the vision data.
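The wavelet-domain step can be illustrated with a minimal sketch (not the authors' fitting procedure): a Haar transform of a noisy piecewise-constant curve concentrates the signal in a few coefficients, so soft-thresholding the detail coefficients suppresses most of the measurement error. The signal, noise level, and threshold below are illustrative assumptions.

```python
import math, random

def haar_dwt(x):
    """Full Haar decomposition of a length-2^J signal.
    Returns (approximation, detail vectors ordered coarse to fine)."""
    details, a = [], list(x)
    while len(a) > 1:
        s = [(a[2*i] + a[2*i+1]) / math.sqrt(2) for i in range(len(a) // 2)]
        d = [(a[2*i] - a[2*i+1]) / math.sqrt(2) for i in range(len(a) // 2)]
        details.append(d)
        a = s
    return a, details[::-1]

def haar_idwt(a, details):
    """Invert haar_dwt."""
    a = list(a)
    for d in details:
        nxt = []
        for s, w in zip(a, d):
            nxt.append((s + w) / math.sqrt(2))
            nxt.append((s - w) / math.sqrt(2))
        a = nxt
    return a

def soft(w, lam):
    """Soft-threshold a single coefficient."""
    return math.copysign(max(abs(w) - lam, 0.0), w)

random.seed(0)
n = 256
truth = [1.0 if n // 4 <= i < n // 2 else 0.0 for i in range(n)]  # piecewise-constant curve
noisy = [t + random.gauss(0, 0.3) for t in truth]                 # additive measurement error

a, det = haar_dwt(noisy)
lam = 0.3 * math.sqrt(2 * math.log(n))      # universal threshold, noise sd assumed known
denoised = haar_idwt(a, [[soft(w, lam) for w in level] for level in det])

mse_noisy = sum((x - t) ** 2 for x, t in zip(noisy, truth)) / n
mse_denoised = sum((x - t) ** 2 for x, t in zip(denoised, truth)) / n
```

Because the piecewise-constant curve has only a handful of nonzero Haar coefficients, thresholding removes mostly noise, and the denoised curve is markedly closer to the truth than the raw observations.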

2.
We consider the problem of testing for additivity and joint effects in multivariate nonparametric regression when the data are modelled as observations of an unknown response function observed on a d-dimensional (d ≥ 2) lattice and contaminated with additive Gaussian noise. We propose tests for additivity and joint effects, appropriate for both homogeneous and inhomogeneous response functions, using the particular structure of the data expanded in tensor product Fourier or wavelet bases studied recently by Amato and Antoniadis (2001) and Amato, Antoniadis and De Feis (2002). The corresponding tests are constructed by applying the adaptive Neyman truncation and wavelet thresholding procedures of Fan (1996), for testing a high-dimensional Gaussian mean, to the resulting empirical Fourier and wavelet coefficients. As a consequence, asymptotic normality of the proposed test statistics under the null hypothesis and lower bounds of the corresponding powers under a specific alternative are derived. We use several simulated examples to illustrate the performance of the proposed tests, and we make comparisons with other tests available in the literature.
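The adaptive Neyman truncation idea can be sketched as follows (a generic sketch of the idea, not this paper's exact construction): given standardised coefficients z_1, ..., z_n that are N(0, 1) under the null, one maximises the normalised partial sums of z_i^2 - 1 over the truncation point, so the statistic adapts to how many coefficients carry signal. The simulated signal below is an assumption for demonstration only.

```python
import math, random

def adaptive_neyman(z):
    """Adaptive truncation statistic for H0: all coordinates have mean zero.
    Maximises the standardised partial sums of z_i^2 - 1 over the cutoff m."""
    best, s = -float("inf"), 0.0
    for m, zi in enumerate(z, start=1):
        s += zi * zi - 1.0
        best = max(best, s / math.sqrt(2 * m))
    return best

random.seed(1)
n = 200
null_z = [random.gauss(0, 1) for _ in range(n)]                       # pure noise
alt_z = [random.gauss(2.0 if i < 20 else 0.0, 1) for i in range(n)]   # signal in first 20 coords

t_null = adaptive_neyman(null_z)
t_alt = adaptive_neyman(alt_z)
```

With the signal concentrated in the leading coefficients, the maximum is attained near m = 20 and the statistic is far larger than under the null, which is the behaviour the truncation is designed to exploit.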

3.
Estimation and testing procedures for generalized additive (interaction) models are developed. We present extensions of several existing procedures for additive models when the link is the identity. This set of methods includes estimation of all component functions and their derivatives, testing functional forms and, in particular, variable selection. Theorems and simulation results are presented for the fundamentally new procedures. These include, in particular, the introduction of local polynomial smoothing for this kind of model and the testing procedures, including variable selection. Our method is straightforward to implement, and the simulation studies show good performance even in small data sets.
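Local polynomial smoothing of degree one (local linear fitting) is the basic building block such procedures rely on. A minimal self-contained sketch (not the paper's estimator; the Gaussian kernel and bandwidth are illustrative choices):

```python
import math

def local_linear(x, y, x0, h):
    """Local linear estimate of E[y | x = x0]: weighted least squares of y on
    (x - x0) with Gaussian kernel weights of bandwidth h; the fitted intercept
    is the estimate at x0 (closed form via the weighted moments S0, S1, S2)."""
    w = [math.exp(-0.5 * ((xi - x0) / h) ** 2) for xi in x]
    s0 = sum(w)
    s1 = sum(wi * (xi - x0) for wi, xi in zip(w, x))
    s2 = sum(wi * (xi - x0) ** 2 for wi, xi in zip(w, x))
    num = sum(wi * (s2 - (xi - x0) * s1) * yi for wi, xi, yi in zip(w, x, y))
    return num / (s0 * s2 - s1 * s1)

xs = [i / 100 for i in range(101)]
ys = [2.0 + 3.0 * xi for xi in xs]            # data lying exactly on a line
fit = local_linear(xs, ys, 0.37, h=0.1)
```

A useful sanity check is that a local linear fit reproduces data lying exactly on a straight line, whatever the kernel weights: here the fit at 0.37 equals 2 + 3 * 0.37 up to rounding.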

4.
We introduce a new goodness-of-fit test which can be applied to hypothesis testing about the marginal distribution of dependent data. We derive a new test for the equivalent hypothesis in the space of wavelet coefficients. Such properties of the wavelet transform as orthogonality, localisation and sparsity make hypothesis testing in the wavelet domain easier than in the domain of distribution functions. We propose to test the null hypothesis separately at each wavelet decomposition level to overcome the problem of the two-dimensional indexing of wavelet coefficients and to be able to identify the frequency at which the empirical distribution function differs from the null in case the null hypothesis is rejected. We suggest a test statistic and state its asymptotic distribution under the null and under some of the alternative hypotheses.

5.
Functional data are being observed frequently in many scientific fields, and therefore most of the standard statistical methods are being adapted to functional data. The multivariate analysis of variance (MANOVA) problem for functional data is considered; it is of practical interest in much the same way as the one-way analysis of variance for such data. For the MANOVA problem for multivariate functional data, we propose permutation tests based on a basis function representation and tests based on random projections. Their performance is examined in comprehensive simulation studies, which provide an idea of the size control and power of the tests and identify differences between them. The simulation experiments are based on artificial data and real labeled multivariate time series data found in the literature. The results suggest that the studied testing procedures can detect small differences between vectors of curves even with small sample sizes. Illustrative real data examples of the use of the proposed testing procedures in practice are also presented.
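A basis-representation permutation test can be sketched in its simplest two-group form (a simplified illustration, not the paper's procedures; the curves, basis dimension, and group difference below are made up): each curve is reduced to a few Fourier coefficients, the test statistic is the squared distance between group-mean coefficient vectors, and its null distribution is obtained by permuting group labels.

```python
import math, random

def fourier_coeffs(curve, k):
    """First k cosine/sine coefficient pairs of a curve on a uniform grid."""
    n = len(curve)
    out = []
    for j in range(1, k + 1):
        out.append(sum(c * math.cos(2 * math.pi * j * i / n) for i, c in enumerate(curve)) / n)
        out.append(sum(c * math.sin(2 * math.pi * j * i / n) for i, c in enumerate(curve)) / n)
    return out

def stat(coefs, labels):
    """Squared distance between the two group-mean coefficient vectors."""
    g0 = [c for c, l in zip(coefs, labels) if l == 0]
    g1 = [c for c, l in zip(coefs, labels) if l == 1]
    m0 = [sum(col) / len(g0) for col in zip(*g0)]
    m1 = [sum(col) / len(g1) for col in zip(*g1)]
    return sum((a - b) ** 2 for a, b in zip(m0, m1))

random.seed(2)
grid = range(64)
curves = ([[math.sin(2 * math.pi * t / 64) + random.gauss(0, 0.2) for t in grid] for _ in range(10)]
          + [[2 * math.sin(2 * math.pi * t / 64) + random.gauss(0, 0.2) for t in grid] for _ in range(10)])
labels = [0] * 10 + [1] * 10
coefs = [fourier_coeffs(c, 3) for c in curves]

obs = stat(coefs, labels)
B, count = 499, 0
for _ in range(B):
    perm = labels[:]
    random.shuffle(perm)          # relabel curves at random under H0
    if stat(coefs, perm) >= obs:
        count += 1
p_value = (count + 1) / (B + 1)   # permutation p-value with the +1 correction
```

Here the two groups differ in amplitude, so the observed statistic is far out in the permutation distribution and the p-value is small.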

6.
Statistical inference in the wavelet domain remains a vibrant area of contemporary statistical research because of the desirable properties of wavelet representations and the need of the scientific community to process, explore, and summarize massive data sets. Prime examples are biomedical, geophysical, and internet-related data. We propose two new approaches to wavelet shrinkage/thresholding.

In the spirit of Efron and Tibshirani's recent work on the local false discovery rate, we propose the Bayesian Local False Discovery Rate (BLFDR), where the underlying model on wavelet coefficients does not assume known variances. This approach to wavelet shrinkage is shown to be connected with shrinkage based on Bayes factors. The second proposal, the Bayesian False Discovery Rate (BaFDR), is based on ordering the posterior probabilities that the true wavelet coefficients are null, in Bayesian testing of multiple hypotheses.
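One common way to turn an ordering of posterior null probabilities into a selection rule (a generic sketch, not necessarily the paper's BaFDR rule; the probabilities below are made up) is to sort the probabilities in increasing order and keep the largest prefix whose running average stays below a target rate q, since that running average estimates the expected proportion of false discoveries among the selected coefficients.

```python
def bayes_fdr_select(post_null, q):
    """Flag coefficients as non-null: order the posterior null probabilities
    ascending and keep the largest prefix whose average is at most q."""
    order = sorted(range(len(post_null)), key=lambda i: post_null[i])
    selected, running = [], 0.0
    for k, i in enumerate(order, start=1):
        running += post_null[i]
        if running / k <= q:        # estimated Bayesian FDR of the first k
            selected = order[:k]
    return sorted(selected)

# Hypothetical posterior null probabilities for five wavelet coefficients
flags = bayes_fdr_select([0.9, 0.01, 0.5, 0.02, 0.95], q=0.05)
```

With q = 0.05 only the two coefficients with tiny posterior null probabilities are flagged; adding the third (0.5) would push the running average above q.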

We demonstrate that both approaches result in competitive shrinkage methods by contrasting them to some popular shrinkage techniques.

7.
In this study, we use wavelet analysis to construct a test statistic for the existence of a trend in a time series. We also propose a new approach for testing the presence of a trend based on the periodogram of the data. Since we are also interested in the presence of a long-memory process among the data, we study the properties of our test statistics under different degrees of dependency. We compare the results of the band periodogram test and the wavelet test with results obtained by applying the ordinary least squares (OLS) method under the same conditions.
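The periodogram intuition behind such a trend test can be sketched as follows (an illustration of the general idea, not the paper's statistic; the series and the choice of four low frequencies are assumptions): a trend loads the periodogram at the lowest Fourier frequencies, so the share of power there separates trended from trend-free series.

```python
import cmath, math, random

def periodogram(x):
    """Raw periodogram |DFT(x - mean)|^2 / n at Fourier frequencies k = 1..n//2."""
    n = len(x)
    m = sum(x) / n
    xc = [v - m for v in x]
    return [abs(sum(v * cmath.exp(-2j * math.pi * k * t / n)
                    for t, v in enumerate(xc))) ** 2 / n
            for k in range(1, n // 2 + 1)]

random.seed(3)
n = 128
trended = [0.05 * t + random.gauss(0, 1) for t in range(n)]   # linear trend plus noise
white = [random.gauss(0, 1) for _ in range(n)]                # no trend

I_t, I_w = periodogram(trended), periodogram(white)
low_share = sum(I_t[:4]) / sum(I_t)          # power share at the 4 lowest frequencies
low_share_white = sum(I_w[:4]) / sum(I_w)
```

For the trended series most of the power sits at the lowest frequencies, while for white noise the expected share of the four lowest ordinates is only 4/64; a test statistic can be built on that contrast.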

8.
We consider a process that is observed as a mixture of two random distributions, where the mixing probability is an unknown function of time. The setup is built upon a wavelet-based mixture regression. Two linear wavelet estimators are proposed. Furthermore, we consider three regularizing procedures for each of the two wavelet methods. We also discuss regularity conditions under which the consistency of the wavelet methods is attained and derive rates of convergence for the proposed estimators. A Monte Carlo simulation study is conducted to illustrate the performance of the estimators. Various scenarios for the mixing probability function are used in the simulations, in addition to a range of sample sizes and resolution levels. We apply the proposed methods to a data set consisting of array Comparative Genomic Hybridization from glioblastoma cancer studies.

9.
We consider hypothesis testing and estimation of carry-over effects in continuous data under an incomplete block crossover design when comparing two experimental treatments with a placebo. We develop procedures for testing differential carry-over effects based on the weighted-least-squares (WLS) method. We apply Monte Carlo simulations to evaluate the performance of these test procedures in a variety of situations. We use the data regarding the forced expiratory volume in one second (FEV1) readings taken from a double-blind crossover trial comparing two different doses of formoterol with a placebo to illustrate the use of test procedures proposed here.

10.
In the framework of null hypothesis significance testing for functional data, we propose a procedure able to select the intervals of the domain responsible for the rejection of a null hypothesis. An unadjusted p-value function and an adjusted one are the output of the procedure, termed interval-wise testing. Depending on the type and level α of type-I error control, significant intervals can be selected by thresholding the two p-value functions at level α. We prove that the unadjusted (adjusted) p-value function controls the probability of type-I error point-wise (interval-wise) and is point-wise (interval-wise) consistent. To highlight the gain in terms of interpretation of the phenomenon under study, we apply interval-wise testing to the analysis of a benchmark functional data set, the Canadian daily temperatures. The new procedure provides insights that current state-of-the-art procedures do not, suggesting similar advantages in the analysis of functional data with less prior knowledge.

11.
Methods for a sequential test of a dose-response effect in pre-clinical studies are investigated. The objective of the test procedure is to compare several dose groups with a zero-dose control. The sequential testing is conducted within a closed family of one-sided tests. The procedures investigated are based on a monotonicity assumption. These closed procedures strongly control the familywise error rate while providing information about the shape of the dose-response relationship. The performance of the sequential testing procedures is compared via a Monte Carlo simulation study. We illustrate the procedures by application to a real data set.

12.
This paper investigates test procedures for the homogeneity of proportions in the analysis of clustered binary data when dispersions are unequal across the treatment groups. We introduce a simple test procedure based on adjusted proportions using a sandwich estimator of the variance of the proportion estimators obtained by the generalized estimating equations approach of Zeger and Liang (1986) [Biometrics 42, 121-130]. We also extend the existing test procedures for the homogeneity of proportions to this context. These test procedures are then compared, by simulations, in terms of size and power. Moreover, we derive the score test for testing the homogeneity of the dispersion parameters among several groups of clustered binary data. An illustrative application of the recommended test procedures is also presented.

13.
We present a unifying approach to multiple testing procedures for sequential (or streaming) data by giving sufficient conditions for a sequential multiple testing procedure to control the familywise error rate (FWER). Together, we call these conditions a ‘rejection principle for sequential tests’, which we then apply to some existing sequential multiple testing procedures to give simplified understanding of their FWER control. Next, the principle is applied to derive two new sequential multiple testing procedures with provable FWER control, one for testing hypotheses in order and another for closed testing. Examples of these new procedures are given by applying them to a chromosome aberration data set and finding the maximum safe dose of a treatment.
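The "testing hypotheses in order" idea can be sketched in its simplest classical form, the fixed-sequence procedure (a standard construction, not the paper's new sequential procedure): each hypothesis is tested at full level α in a pre-specified order and testing stops at the first non-rejection, which suffices to control the FWER at α.

```python
def fixed_sequence_test(p_values, alpha=0.05):
    """Test hypotheses in a pre-specified order, each at full level alpha,
    stopping at the first non-rejection; returns indices of rejections.
    Controls the familywise error rate at alpha."""
    rejected = []
    for i, p in enumerate(p_values):
        if p <= alpha:
            rejected.append(i)
        else:
            break                   # stop: all later hypotheses are retained
    return rejected

# Hypothetical ordered p-values: the third fails, so the fourth is never reached
rej = fixed_sequence_test([0.001, 0.012, 0.20, 0.003])
```

Note that the fourth hypothesis is retained despite its small p-value (0.003): the ordering must be fixed before seeing the data, and early non-rejections block everything downstream.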

14.
We consider the problem of testing the equality of marginal survival distributions observed from paired lifetime data. Usual procedures include the paired t-test, which may perform poorly for certain types of data. We propose asymptotic tests based on gamma frailty models with Weibull conditional distributions, and investigate their theoretical properties using large-sample theory. For finite samples, we conduct simulations to evaluate the powers of the associated tests. For moderately and less skewed data, the proposed tests are the most powerful among the commonly applied testing procedures. A data example is presented to illustrate the methods.

15.
Pairwise comparisons for proportions estimated by pooled testing
When estimating the prevalence of a rare trait, pooled testing can confer substantial benefits when compared to individual testing. In addition to screening experiments for infectious diseases in humans, pooled testing has also been exploited in other applications such as drug testing, epidemiological studies involving animal disease, plant disease assessment, and screening for rare genetic mutations. Within a pooled-testing context, we consider situations wherein different strata or treatments are to be compared with the goals of assessing significant and practical differences between strata and ranking strata in terms of prevalence. To achieve these goals, we first present two simultaneous pairwise interval estimation procedures for use with pooled data. Our procedures rely on asymptotic results, so we investigate small-sample behavior and compare the two procedures in terms of simultaneous coverage probability and mean interval length. We then present a unified approach to determine pool sizes which deliver desired coverage properties while taking testing costs and interval precision into account. We illustrate our methods using data from an observational HIV study involving heterosexual males who use intravenous drugs.
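The basic prevalence estimator underlying such pairwise comparisons can be sketched as follows (a generic construction under an assumed perfect assay, not the paper's simultaneous procedures; all counts below are made up): with m pools of size s and k testing positive, the MLE of prevalence is p̂ = 1 - ((m - k)/m)^(1/s), and a delta-method variance supports a Wald interval for the difference between two strata.

```python
import math

def pooled_prevalence(k, m, s):
    """MLE of prevalence p from k positive pools out of m pools of size s,
    assuming a perfect assay: a pool is negative iff all s members are negative."""
    theta_hat = (m - k) / m                  # observed fraction of negative pools
    return 1.0 - theta_hat ** (1.0 / s)

def pooled_var(k, m, s):
    """Delta-method variance of the prevalence MLE:
    Var(p_hat) ~ theta(1-theta)/m * theta^(2/s - 2) / s^2."""
    theta_hat = (m - k) / m
    if theta_hat in (0.0, 1.0):
        return float("nan")                  # variance undefined at the boundary
    return theta_hat * (1 - theta_hat) / m * theta_hat ** (2.0 / s - 2.0) / s ** 2

# Wald interval for the prevalence difference between two hypothetical strata
p1, v1 = pooled_prevalence(8, 50, 5), pooled_var(8, 50, 5)
p2, v2 = pooled_prevalence(21, 50, 5), pooled_var(21, 50, 5)
diff = p2 - p1
half = 1.96 * math.sqrt(v1 + v2)
ci = (diff - half, diff + half)
```

With s = 1 the estimator reduces to the ordinary sample proportion k/m, a convenient sanity check; here the interval for the difference excludes zero, so the two strata would be declared different at the 5% level.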

16.
Image processing through multiscale analysis and measurement noise modeling
We describe a range of powerful multiscale analysis methods. We also focus on the pivotal issue of measurement noise in the physical sciences. From multiscale analysis and noise modeling, we develop a comprehensive methodology for data analysis of 2D images, 1D signals (or spectra), and point pattern data. Noise modeling is based on the following: (i) multiscale transforms, including wavelet transforms; (ii) a data structure termed the multiresolution support; and (iii) multiple scale significance testing. The latter two aspects serve to characterize signal with respect to noise. The data analysis objectives we deal with include noise filtering and scale decomposition for visualization or feature detection.

17.
The decorrelating property of the discrete wavelet transformation (DWT) appears valuable because one can avoid estimating the correlation structure in the original data space by bootstrap resampling of the DWT. Several authors have shown that the wavestrap approximately retains the correlation structure of the observations. However, simply retaining the correlation structure of the original observations does not guarantee enough variation for regression parameter estimators. Our simulation studies show that these wavestraps yield undercoverage of parameters for a simple linear regression for time series data of the type that arise in functional MRI experiments. It is disappointing that the wavestrap does not provide valid resamples for either white noise sequences or fractional Brownian noise sequences. Thus, the wavestrap method is not completely valid for obtaining resamples related to linear regression analysis and should be used with caution for hypothesis testing as well. The reasons for these undercoverages are also discussed. A parametric bootstrap resampling in the wavelet domain is introduced to offer insight into these previously undiscovered defects in wavestrapping.

18.
Functional data analysis has emerged as a new area of statistical research with a wide range of applications. In this paper, we propose novel models based on wavelets for spatially correlated functional data. These models enable one to regularize curves observed over space and predict curves at unobserved sites. We compare the performance of these Bayesian models with several priors on the wavelet coefficients using the posterior predictive criterion. The proposed models are illustrated in the analysis of porosity data.

19.
Functional logistic regression is becoming more popular, as there are many situations where we are interested in the relation between functional covariates (as input) and a binary response (as output). Several approaches have been advocated, and this paper goes into detail about three of them: dimension reduction via functional principal component analysis, penalized functional regression, and wavelet expansions in combination with Least Absolute Shrinkage and Selection Operator (LASSO) penalization. We discuss the performance of the three methods on simulated data and also apply the methods to data regarding lameness detection for horses. Emphasis is on classification performance, but we also discuss estimation of the unknown parameter function.

20.
We introduce classical approaches for testing hypotheses on the meiosis I non-disjunction fraction in trisomies, such as the likelihood-ratio, bootstrap, and Monte Carlo procedures. To calculate the p-values for the bootstrap and Monte Carlo procedures, different transformations of the data are considered. Bootstrap confidence intervals are also used as a tool to perform hypothesis tests. A Monte Carlo study is carried out to compare the proposed test procedures with two Bayesian ones: the Jeffreys and Pereira-Stern tests. The results show that the likelihood-ratio and the Bayesian tests present the best performance. Down syndrome data are analyzed to illustrate the procedures.
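A Monte Carlo p-value of the kind compared here can be sketched for the simplest case of a hypothesized fraction (a generic illustration, not the paper's trisomy model; the counts and null value below are made up): simulate the test statistic under the null and count how often it is at least as extreme as the observed one.

```python
import random

def monte_carlo_pvalue(obs_count, n, p0, B=999, seed=0):
    """Monte Carlo two-sided p-value for H0: success fraction = p0, using the
    distance of the count from its null expectation as the test statistic."""
    rng = random.Random(seed)
    obs_stat = abs(obs_count - n * p0)
    hits = 0
    for _ in range(B):
        sim = sum(rng.random() < p0 for _ in range(n))   # simulate a null count
        if abs(sim - n * p0) >= obs_stat:
            hits += 1
    return (hits + 1) / (B + 1)     # +1 correction keeps the p-value valid

p_far = monte_carlo_pvalue(80, 100, 0.5)    # observed count far from H0
p_near = monte_carlo_pvalue(52, 100, 0.5)   # observed count close to H0
```

An observed count of 80 out of 100 under a null fraction of 0.5 yields a tiny p-value, while 52 out of 100 does not come close to rejection.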


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号