Subscription full text: 13
Free: 0
Subject: Statistics (13)
By year: 2023 (1) · 2021 (1) · 2018 (2) · 2016 (1) · 2014 (1) · 2013 (2) · 2012 (2) · 2011 (1) · 2009 (1) · 2004 (1)
13 results found (search time: 15 ms)
1.
We analyze a variant of the EGARCH model that captures intra-day price variation. We study the asymptotic behavior of the estimators of the model parameters and illustrate the theoretical results with empirical studies.
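The abstract does not specify the intra-day variant, so as a point of reference, here is a minimal simulation sketch of the standard EGARCH(1,1) log-variance recursion (Nelson, 1991); the parameter values are purely illustrative:

```python
import numpy as np

def simulate_egarch(n, omega=-0.1, alpha=0.1, gamma=-0.05, beta=0.95, seed=0):
    """Simulate a standard EGARCH(1,1) path:
    log sigma_t^2 = omega + beta*log sigma_{t-1}^2
                    + alpha*(|z_{t-1}| - E|z|) + gamma*z_{t-1}."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    Ez = np.sqrt(2.0 / np.pi)          # E|z| for standard normal innovations
    logs2 = np.empty(n)
    logs2[0] = omega / (1.0 - beta)    # start at the stationary mean of log sigma^2
    for t in range(1, n):
        logs2[t] = (omega + beta * logs2[t - 1]
                    + alpha * (abs(z[t - 1]) - Ez) + gamma * z[t - 1])
    sigma = np.exp(0.5 * logs2)
    return sigma * z, sigma            # returns and conditional volatilities

r, sigma = simulate_egarch(5000)
```

Because the recursion is on the log variance, positivity of the conditional variance holds without parameter constraints, which is the usual appeal of EGARCH over GARCH.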
2.
3.
We are concerned with the estimation of the exterior surface and interior summaries of tube-shaped anatomical structures. This interest is motivated by two distinct scientific goals, one dealing with the distribution of HIV microbicide in the colon and the other with measuring degradation in white-matter tracts in the brain. Our problem is posed as the estimation of the support of a distribution in three dimensions from a sample from that distribution, possibly measured with error. We propose a novel tube-fitting algorithm to construct such estimators. Further, we conduct a simulation study to aid in the choice of a key parameter of the algorithm, and we test our algorithm with a validation study tailored to the motivating data sets. Finally, we apply the tube-fitting algorithm to a colon image produced by single photon emission computed tomography (SPECT) and to a white-matter tract image produced using diffusion tensor imaging (DTI).
4.
5.
We propose, develop, and implement a fully Bayesian inferential approach for the Cox model when the log hazard function contains unknown smooth functions of the variables measured with error. Our approach is to model nonparametrically both the log-baseline hazard and the smooth components of the log-hazard functions using low-rank penalized splines. Careful implementation of the Bayesian inferential machinery is shown to produce remarkably better results than the naive approach. Our methodology was motivated by and applied to the study of progression time to chronic kidney disease as a function of baseline kidney function in the Atherosclerosis Risk in Communities (ARIC) study, a large epidemiological cohort study. This article has supplementary material online.
6.
Penalized spline regression using a mixed effects representation is one of the most popular nonparametric regression tools to estimate an unknown regression function $f(\cdot)$. In this context, testing for polynomial regression against a general alternative is equivalent to testing for a zero variance component. In this paper, we fill the gap between different published null distributions of the corresponding restricted likelihood ratio test under different assumptions. We show that: (1) the asymptotic scenario is determined by the choice of the penalty, not by the choice of the spline basis or number of knots; (2) non-standard asymptotic results correspond to common penalized spline penalties on derivatives of $f(\cdot)$, which ensure good power properties; and (3) standard asymptotic results correspond to penalized spline penalties on $f(\cdot)$ itself, which lead to sizeable power losses under smooth alternatives. We provide simple, easy-to-use guidelines for the restricted likelihood ratio test in this context.
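The mixed-effects representation mentioned above can be sketched numerically: with a truncated-line basis, penalizing only the knot coefficients is a ridge fit whose penalty weight plays the role of the variance ratio $\sigma^2/\sigma_u^2$, so the polynomial null model is recovered as the variance component goes to zero (penalty to infinity). A minimal numpy sketch on simulated data (all names and values illustrative):

```python
import numpy as np

def pspline_fit(x, y, num_knots=20, lam=1.0):
    """Penalized spline with truncated-line basis: y = X*beta + Z*u,
    u ~ N(0, sigma_u^2 I).  Ridge-penalizing only u (with lam playing the
    role of sigma^2/sigma_u^2) gives the mixed-model BLUP fit; lam -> inf
    collapses to the linear null model."""
    knots = np.quantile(x, np.linspace(0, 1, num_knots + 2)[1:-1])
    X = np.column_stack([np.ones_like(x), x])          # unpenalized polynomial part
    Z = np.maximum(x[:, None] - knots[None, :], 0.0)   # truncated-line basis
    C = np.hstack([X, Z])
    D = np.diag([0.0, 0.0] + [lam] * num_knots)        # penalize spline coefs only
    coef = np.linalg.solve(C.T @ C + D, C.T @ y)
    return C @ coef

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 200)
fhat = pspline_fit(x, y, lam=1.0)
```

Under a nonlinear truth like the sine curve above, the spline fit has a visibly smaller residual sum of squares than the straight-line null fit, which is exactly the situation the restricted likelihood ratio test is designed to detect.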
7.
This paper introduces a general framework for testing hypotheses about the structure of the mean function of complex functional processes. Important particular cases of the proposed framework are as follows: (1) testing the null hypothesis that the mean of a functional process is parametric against a general alternative modelled by penalized splines; and (2) testing the null hypothesis that the means of two possibly correlated functional processes are equal or differ by only a simple parametric function. A global pseudo-likelihood ratio test is proposed, and its asymptotic distribution is derived. The size and power properties of the test are confirmed in realistic simulation scenarios. Finite-sample power results indicate that the proposed test is much more powerful than competing alternatives. Methods are applied to testing the equality between the means of normalized δ-power of sleep electroencephalograms of subjects with sleep-disordered breathing and matched controls.
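The pseudo-likelihood ratio test itself is beyond a short sketch, but the null hypothesis in case (2) can be illustrated with a simple permutation test that uses the integrated squared difference of the two sample mean functions as a global statistic; this is a generic stand-in for illustration, not the authors' test:

```python
import numpy as np

def mean_equality_perm_test(A, B, n_perm=2000, seed=4):
    """Global test of H0: the mean curves of groups A and B are equal.
    Statistic: sum over the grid of the squared difference of sample mean
    functions; null distribution: random reassignment of group labels."""
    rng = np.random.default_rng(seed)
    stat = np.sum((A.mean(axis=0) - B.mean(axis=0)) ** 2)
    pooled = np.vstack([A, B])
    nA = len(A)
    null = np.empty(n_perm)
    for b in range(n_perm):
        perm = rng.permutation(len(pooled))
        diff = pooled[perm[:nA]].mean(axis=0) - pooled[perm[nA:]].mean(axis=0)
        null[b] = np.sum(diff ** 2)
    return np.mean(null >= stat)            # permutation p-value

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 50)
A = np.sin(2 * np.pi * t) + rng.normal(0, 0.5, (30, 50))
B = np.sin(2 * np.pi * t) + 1.0 + rng.normal(0, 0.5, (30, 50))  # shifted mean
p = mean_equality_perm_test(A, B)
```

Note that label permutation assumes independent curves across subjects; the paper's framework additionally handles correlated functional processes, which this sketch does not.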
8.
We consider the problem of testing null hypotheses that include restrictions on the variance component in a linear mixed model with one variance component, and we derive the finite-sample and asymptotic distributions of the likelihood ratio test and the restricted likelihood ratio test. The spectral representations of the likelihood ratio and restricted likelihood ratio test statistics are used as the basis of efficient simulation algorithms for their null distributions. The large-sample χ² mixture approximations, based on the usual asymptotic theory for a null hypothesis on the boundary of the parameter space, have been shown to be poor in simulation studies; our asymptotic calculations explain these empirical results. The theory of Self and Liang applies only to linear mixed models for which the data vector can be partitioned into a large number of independent and identically distributed subvectors. One-way analysis of variance and penalized spline models illustrate the results.
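The spectral representation is what makes null-distribution simulation cheap: each Monte Carlo draw requires only K + 1 independent chi-square variates and a one-dimensional maximization over the variance ratio, with no matrix decompositions inside the loop. A sketch with illustrative eigenvalues (in an application, the μ values come from the design matrices, and the grid resolution is a tuning choice):

```python
import numpy as np

def rlrt_null_sim(mu, n, p, nsim=5000, lam_grid=None, seed=0):
    """Simulate the null distribution of the restricted likelihood ratio
    test for a single variance component via its spectral representation:
    RLRT = sup_lam { (n-p) log(1 + N/D) - sum_s log(1 + lam*mu_s) }."""
    rng = np.random.default_rng(seed)
    K = len(mu)
    if lam_grid is None:
        lam_grid = np.concatenate([[0.0], np.geomspace(1e-6, 1e6, 200)])
    f = n - p                                    # residual degrees of freedom
    w2 = rng.standard_normal((nsim, K)) ** 2     # w_s^2, s = 1..K
    tail = rng.chisquare(f - K, size=nsim)       # sum_{s>K} w_s^2 in one draw
    stats = np.zeros(nsim)                       # lam = 0 gives statistic 0
    for lam in lam_grid:
        r = lam * mu / (1.0 + lam * mu)          # shrinkage factors
        N = w2 @ r
        D = w2 @ (1.0 - r) + tail
        val = f * np.log1p(N / D) - np.sum(np.log1p(lam * mu))
        stats = np.maximum(stats, val)
    return stats

mu = 1.0 / np.arange(1, 11) ** 2                 # illustrative eigenvalues
null = rlrt_null_sim(mu, n=100, p=2)
prob_zero = np.mean(null <= 1e-10)               # mass at zero
```

The simulated point mass at zero typically exceeds one half, which is exactly why the usual 0.5:0.5 χ² mixture approximation is poor in this setting.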
9.
Smoothing of noisy sample covariances is an important component in functional data analysis. We propose a novel covariance smoothing method based on penalized splines and associated software. The proposed method is a bivariate spline smoother that is designed for covariance smoothing and can be used for sparse functional or longitudinal data. We propose a fast algorithm for covariance smoothing using leave-one-subject-out cross-validation. Our simulations show that the proposed method compares favorably against several commonly used methods. The method is applied to a study of child growth led by one of the coauthors and to a public dataset of longitudinal CD4 counts.
10.
Fingerprinting of functional connectomes is an increasingly standard measure of reproducibility in functional magnetic resonance imaging connectomics. In such studies, one attempts to match a subject's first session image with their second, in a blinded fashion, in a group of subjects measured twice. The number or percentage of correct matches is usually reported as a statistic, which is then used in permutation tests. Despite the simplicity and increasing popularity of such procedures, the soundness of the statistical tests, their power, and the factors impacting the tests are unstudied. In this article, we investigate the statistical tests of matching based on the exchangeability assumption in fingerprinting analysis. We show that a nearly universal Poisson(1) approximation applies for different matching schemes. We theoretically investigate the permutation tests and explore the issue that the test is overly sensitive to uninteresting directions in the alternative hypothesis, such as clustering due to familial status or demographics. We perform a numerical study on two functional magnetic resonance imaging (fMRI) resting-state datasets, the Human Connectome Project (HCP) and the Baltimore Longitudinal Study of Aging (BLSA). These datasets are instructive, as the HCP includes technical replications of long scans and includes monozygotic and dizygotic twins, as well as non-twin siblings. In contrast, the BLSA study incorporates resting-state scans of more typical length in a longitudinal study. Finally, a study of single regional connections is performed on the HCP data.
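The Poisson(1) approximation is easy to check by simulation: under the exchangeability null, a blinded matching behaves like a uniform random permutation, and the number of correct matches is its number of fixed points, with mean 1 and P(no correct matches) ≈ e⁻¹ essentially regardless of the number of subjects. A small check (the subject count of 100 is arbitrary):

```python
import numpy as np

def match_counts(n_subjects, n_null=20000, seed=3):
    """Draw the null distribution of the number of correct matches:
    fixed points of a uniform random permutation of n_subjects items."""
    rng = np.random.default_rng(seed)
    idx = np.arange(n_subjects)
    return np.array([np.sum(rng.permutation(n_subjects) == idx)
                     for _ in range(n_null)])

counts = match_counts(100)
# mean number of correct matches is ~1; fraction with zero matches is ~1/e
```

This dimension-free null is what makes match counts from studies of very different sizes comparable, and it is the baseline against which observed fingerprinting accuracy is judged.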

Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)