Similar Documents
20 similar documents found.
1.
ABSTRACT

We present methods for modeling and estimation of a concurrent functional regression when the predictors and responses are two-dimensional functional datasets. The implementations use spline basis functions, and model fitting is based on smoothing penalties and mixed model estimation. The proposed methods are implemented in available statistical software, allow the construction of confidence intervals for the bivariate model parameters, and can be applied to completely or sparsely sampled responses. The methods are tested on simulated data and show favorable results in practice. Their usefulness is illustrated in an application to environmental data.
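
A minimal sketch of the concurrent-regression idea, not the authors' implementation: fit y_i(t_j) = b0(t_j) + b1(t_j) x_i(t_j) pointwise at each grid point, then smooth the raw coefficient curve with a second-difference roughness penalty, which stands in here for the paper's spline-basis and mixed-model machinery. The data, grid, and penalty value `lam` are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 60, 100                       # curves, grid points
t = np.linspace(0, 1, m)
x = rng.normal(size=(n, m))          # simulated functional predictor
beta1_true = np.sin(2 * np.pi * t)   # true varying coefficient
y = 1.0 + beta1_true * x + 0.3 * rng.normal(size=(n, m))

# Step 1: raw pointwise OLS of y(., t_j) on x(., t_j) at each grid point.
b0_raw = np.empty(m)
b1_raw = np.empty(m)
for j in range(m):
    X = np.column_stack([np.ones(n), x[:, j]])
    coef, *_ = np.linalg.lstsq(X, y[:, j], rcond=None)
    b0_raw[j], b1_raw[j] = coef

# Step 2: roughness-penalised smoothing of the raw coefficient curve
# (Whittaker-type smoother: minimise ||b - z||^2 + lam * ||D2 z||^2).
D2 = np.diff(np.eye(m), n=2, axis=0)
lam = 50.0                            # illustrative smoothing parameter
A = np.eye(m) + lam * D2.T @ D2
b1_smooth = np.linalg.solve(A, b1_raw)
print(np.round(b1_smooth[::20], 2))   # smoothed beta1(t) at a few grid points
```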

2.
ABSTRACT

The functional linear model is of great practical importance, as exemplified by applications in high-throughput studies such as meteorological and biomedical research. In this paper, we propose a new functional variable selection procedure, called functional variable selection via Gram–Schmidt (FGS) orthogonalization, for a functional linear model with a scalar response and multiple functional predictors. Instead of regularization methods, FGS accounts for the similarity between the functional predictors in a data-driven way and uses Gram–Schmidt orthogonalization to remove the irrelevant predictors. FGS can successfully discriminate between the relevant and the irrelevant functional predictors, achieving a high true positive ratio without including many irrelevant predictors, and yields explainable models, which offers a new perspective on variable selection in the functional linear model. Simulation studies are carried out to evaluate the finite-sample performance of the proposed method, and a weather data set is also analysed.
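
The orthogonalization step that FGS builds on can be illustrated on ordinary score vectors (for example, basis coefficients of the functional predictors); the selection rule itself is not reproduced here, and all names and data below are illustrative.

```python
import numpy as np

def gram_schmidt(X):
    """Orthonormalize the columns of X sequentially (classical Gram-Schmidt)."""
    n, p = X.shape
    Q = np.zeros((n, p))
    for j in range(p):
        v = X[:, j].copy()
        for k in range(j):
            v -= (Q[:, k] @ X[:, j]) * Q[:, k]   # remove components already explained
        norm = np.linalg.norm(v)
        Q[:, j] = v / norm if norm > 1e-12 else 0.0
    return Q

rng = np.random.default_rng(1)
scores = rng.normal(size=(50, 4))          # e.g. basis scores of 4 functional predictors
scores[:, 3] = scores[:, 0] + 0.01 * rng.normal(size=50)  # nearly redundant predictor
Q = gram_schmidt(scores)
print(np.round(Q.T @ Q, 2))                # approximately the identity matrix
```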

3.
This paper continues the study of the software reliability model of Fakhre-Zakeri & Slud (1995), an "exponential order statistic model" in the sense of Miller (1986) with general mixing distribution, imperfect debugging, and large-sample asymptotics reflecting growth of the initial number of bugs with software size. The parameters of the model are θ (proportional to the initial number of bugs in the software), G(·, μ) (the mixing df, with finite-dimensional unknown parameter μ, for the rates λ_i with which the bugs in the software cause observable system failures), and p (the probability with which a detected bug is instantaneously replaced by another bug instead of being removed). Maximum likelihood estimation theory for (θ, p, μ) is applied to construct a likelihood-based score test, for large-sample data, of the hypothesis of "perfect debugging" (p = 0) vs "imperfect debugging" (p > 0) within the models studied. There are important models (including the Jelinski–Moranda) under which the score statistics with 1/√n normalization are asymptotically degenerate. These statistics, illustrated on a software reliability data set of Musa (1980), can nevertheless serve as important diagnostics for the inadequacy of simple models.

4.
In this article, we consider the problem of selecting functional variables using L1 regularization in a functional linear regression model with a scalar response and functional predictors, in the presence of outliers. Since the LASSO is a special case of penalized least-squares regression with an L1 penalty, it suffers in the presence of heavy-tailed errors and/or outliers in the data. Recently, Least Absolute Deviation (LAD) and LASSO methods have been combined (the LAD-LASSO regression method) to carry out robust parameter estimation and variable selection simultaneously for a multiple linear regression model. However, selection of functional predictors based on the LASSO fails, since multiple parameters correspond to a single functional predictor; the group LASSO, which selects grouped rather than individual variables, is therefore used instead. In this study, we propose a robust functional predictor selection method, the LAD-group LASSO, for a functional linear regression model with a scalar response and functional predictors. We illustrate the performance of the LAD-group LASSO on both simulated and real data.
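
A rough sketch of the LAD-group-LASSO criterion, written with the generic convex solver cvxpy after each functional predictor has been reduced to a block of basis coefficients. The basis expansion, the penalty level `lam`, and the data are placeholders, not the authors' code.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n, n_pred, m = 100, 5, 6                   # samples, functional predictors, basis fns each
Z = rng.normal(size=(n, n_pred * m))       # basis-coefficient design matrix (placeholder)
beta_true = np.zeros(n_pred * m)
beta_true[:m] = 1.0                        # only the first functional predictor matters
y = Z @ beta_true + rng.standard_t(df=2, size=n)   # heavy-tailed errors

beta = cp.Variable(n_pred * m)
lam = 2.0                                  # illustrative penalty level
lad_loss = cp.norm1(y - Z @ beta)          # least absolute deviations loss
group_pen = sum(cp.norm(beta[g * m:(g + 1) * m], 2) for g in range(n_pred))
cp.Problem(cp.Minimize(lad_loss + lam * group_pen)).solve()

group_norms = [np.linalg.norm(beta.value[g * m:(g + 1) * m]) for g in range(n_pred)]
print(np.round(group_norms, 3))            # non-zero blocks = selected predictors
```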

5.
This paper considers likelihood-based estimation under the Cox proportional hazards model in situations where some covariate entries are missing not at random. Assuming the conditional distribution of the missing entries is known, we demonstrate the existence of the semiparametric maximum likelihood estimator (SPMLE) of the model parameters and establish its consistency and weak convergence. By simulation, we examine the finite-sample performance of the estimation procedure and compare the SPMLE with the one resulting from using an estimated conditional distribution of the missing entries. For illustration, we analyze data from a tuberculosis (TB) study using the proposed approach.

6.
In this paper we show that fully likelihood-based estimation and comparison of multivariate stochastic volatility (SV) models can be easily performed via the freely available Bayesian software WinBUGS. Moreover, we introduce to the literature several new specifications that are natural extensions of certain existing models, one of which allows for time-varying correlation coefficients. The ideas are illustrated by fitting nine multivariate SV models to a bivariate time series of weekly exchange rates, including specifications with Granger causality in volatility, time-varying correlations, heavy-tailed error distributions, an additive factor structure, and a multiplicative factor structure. Empirical results suggest that the best specifications are those that allow for time-varying correlation coefficients.
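
To give a concrete picture of the favoured class of specifications, the snippet below simulates a bivariate SV process with a time-varying correlation coefficient (AR(1) log-volatilities and a Fisher-transformed AR(1) correlation). This is just one plausible member of that class, not the paper's exact WinBUGS model, and all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 500
mu, phi, sigma_eta = -1.0, 0.95, 0.2       # log-volatility AR(1) (illustrative values)
rho_phi, sigma_rho = 0.98, 0.05            # dynamics of the transformed correlation

h = np.zeros((T, 2))                       # log-volatilities of the two series
q = np.zeros(T)                            # Fisher-transformed correlation
y = np.zeros((T, 2))                       # observed returns
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.normal(size=2)
    q[t] = rho_phi * q[t - 1] + sigma_rho * rng.normal()
    rho_t = np.tanh(q[t])                  # keeps the correlation in (-1, 1)
    c12 = rho_t * np.exp(0.5 * (h[t, 0] + h[t, 1]))
    cov = np.array([[np.exp(h[t, 0]), c12],
                    [c12, np.exp(h[t, 1])]])
    y[t] = rng.multivariate_normal(np.zeros(2), cov)

print(np.round(np.corrcoef(y[1:, 0], y[1:, 1])[0, 1], 2))   # overall sample correlation
```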

8.
Abstract.  Functional data analysis is a growing research field as more and more practical applications involve functional data. In this paper, we focus on the problem of regression and classification with functional predictors: the suggested model combines an efficient dimension reduction procedure [functional sliced inverse regression, first introduced by Ferré & Yao (Statistics, 37, 2003, 475)], for which we give a regularized version, with the accuracy of a neural network. Some consistency results are given and the method is successfully applied to real-life data.
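
A bare-bones sketch of ordinary (non-functional, unregularized) sliced inverse regression, the building block that the functional SIR step generalizes; it recovers the effective dimension-reduction direction, up to sign, from slice means of the standardized predictors. The data-generating model below is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p, H = 1000, 6, 10                       # samples, predictors, slices
X = rng.normal(size=(n, p))
beta = np.array([1.0, -1.0, 0, 0, 0, 0])
y = np.exp(0.5 * (X @ beta)) + 0.1 * rng.normal(size=n)

# Standardize X, slice on y, and average the standardized X within slices.
mu, Sigma = X.mean(axis=0), np.cov(X, rowvar=False)
A = np.linalg.inv(np.linalg.cholesky(Sigma)).T    # A A^T = Sigma^{-1}
Z = (X - mu) @ A
order = np.argsort(y)
M = np.zeros((p, p))
for idx in np.array_split(order, H):
    zbar = Z[idx].mean(axis=0)
    M += (len(idx) / n) * np.outer(zbar, zbar)

# Leading eigenvector of M, mapped back to the original predictor scale.
vals, vecs = np.linalg.eigh(M)
direction = A @ vecs[:, -1]
print(np.round(direction / np.linalg.norm(direction), 2))  # ~ +/-(0.71, -0.71, 0, ...)
```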

9.

Considering alternative models for exchange rates has always been a central issue in applied research. Despite this fact, formal likelihood-based comparisons of competing models are extremely rare. In this paper, we apply the Bayesian marginal likelihood concept to compare GARCH, stable, stable GARCH, stochastic volatility, and a new stable Paretian stochastic volatility model for seven major currencies. Inference is based on combining Monte Carlo methods with Laplace integration. The empirical results show that neither GARCH nor stable models are clear winners, and a GARCH model with stable innovations is the model best supported by the data.
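
The Laplace-integration ingredient can be sketched generically: approximate the log marginal likelihood by expanding the log posterior around its mode. The helper below uses a crude finite-difference Hessian, and the toy model (a normal mean with a normal prior) is a placeholder; the paper's exchange-rate models and Monte Carlo machinery are far richer.

```python
import numpy as np
from scipy import optimize

def laplace_log_marginal(neg_log_post, theta0):
    """log m(y) ~= -neg_log_post(theta_hat) + (d/2) log(2*pi) - 0.5 log|H|,
    where H is the Hessian of neg_log_post at its minimizer (the posterior mode)."""
    res = optimize.minimize(neg_log_post, theta0, method="Nelder-Mead")
    theta_hat, d, eps = res.x, res.x.size, 1e-4
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            ei, ej = np.eye(d)[i] * eps, np.eye(d)[j] * eps
            H[i, j] = (neg_log_post(theta_hat + ei + ej)
                       - neg_log_post(theta_hat + ei - ej)
                       - neg_log_post(theta_hat - ei + ej)
                       + neg_log_post(theta_hat - ei - ej)) / (4 * eps ** 2)
    sign, logdet = np.linalg.slogdet(H)
    return -res.fun + 0.5 * d * np.log(2 * np.pi) - 0.5 * logdet

# Toy example: y_i ~ N(mu, 1) with prior mu ~ N(0, 10^2).
rng = np.random.default_rng(5)
y = rng.normal(1.5, 1.0, size=50)

def neg_log_post(theta):
    mu = np.atleast_1d(theta)[0]
    loglik = -0.5 * np.sum((y - mu) ** 2) - 0.5 * len(y) * np.log(2 * np.pi)
    logprior = -0.5 * (mu / 10.0) ** 2 - 0.5 * np.log(2 * np.pi * 100.0)
    return -(loglik + logprior)

print(round(laplace_log_marginal(neg_log_post, np.array([0.0])), 3))
```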

10.
Principal fitted component (PFC) models are a class of likelihood-based inverse regression methods that yield a so-called sufficient reduction of the random p-vector of predictors X given the response Y. Assuming that a large number of the predictors carry no information about Y, we aim to obtain an estimate of the sufficient reduction that ‘purges’ these irrelevant predictors, and thus select the most useful ones. We devise a procedure that uses observed significance values from the univariate fittings to yield a sparse PFC, a purged estimate of the sufficient reduction. The performance of the method is compared to that of penalized forward linear regression models for variable selection in high-dimensional settings.
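
A toy version of the screening step described above, using observed significance values from univariate inverse-regression fits; the PFC estimation itself and the paper's exact cut-off rule are not reproduced, and the basis f(y), the threshold, and the data are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n, p = 200, 30
y = rng.normal(size=n)
X = rng.normal(size=(n, p))
X[:, 0] += 1.5 * y                        # only the first two predictors carry information
X[:, 1] -= 1.0 * y ** 2

# Inverse-regression screening: regress each predictor on a basis f(y)
# and record the overall F-test p-value of that univariate fit.
Fy = sm.add_constant(np.column_stack([y, y ** 2, y ** 3]))
pvals = np.array([sm.OLS(X[:, j], Fy).fit().f_pvalue for j in range(p)])
keep = np.where(pvals < 0.05 / p)[0]       # Bonferroni-style cut-off (illustrative)
print(keep)                                # expected to retain roughly predictors 0 and 1
```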

11.
In this study, the empirical likelihood method is applied to the partially linear varying-coefficient model in which some covariates are measured with additive errors and the response variable is sometimes missing. Based on the correction-for-attenuation technique, we define an empirical likelihood-based statistic for the parametric component and show that its limiting distribution is a chi-squared distribution. The confidence regions of the parameters are constructed accordingly. Furthermore, a simulation study is conducted to evaluate the performance of the proposed method.

12.
13.
We consider the issue of performing accurate small-sample likelihood-based inference in beta regression models, which are useful for modelling continuous proportions that are affected by independent variables. We derive small-sample adjustments to the likelihood ratio statistic in this class of models. The adjusted statistics can be easily implemented using standard statistical software. We present Monte Carlo simulations showing that inference based on the adjusted statistics we propose is much more reliable than that based on the usual likelihood ratio statistic. A real data example is presented.

14.
Parameter dependency within data sets in simulation studies is common, especially in models such as continuous-time Markov chains (CTMCs). Additionally, the literature lacks a comprehensive examination of estimation performance for the likelihood-based general multi-state CTMC. Among studies attempting to assess the estimation, none have accounted for dependency among parameter estimates. The purpose of this research is twofold: (1) to develop a multivariate approach for assessing accuracy and precision in simulation studies, and (2) to add to the literature a comprehensive examination of the estimation of a general 3-state CTMC model. Simulation studies are conducted to analyze longitudinal data with a trinomial outcome using a CTMC with and without covariates. Measures of performance, including bias, component-wise coverage probabilities, and joint coverage probabilities, are calculated. An application is presented using Alzheimer's disease caregiver stress levels. Comparisons of joint and component-wise parameter estimates yield conflicting inferential results in simulations from models with and without covariates. In conclusion, caution should be taken when conducting simulation studies aiming to assess performance, and the choice of inference should properly reflect the purpose of the simulation.
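
One natural multivariate way to assess joint coverage, in the spirit described above, is to check whether the true parameter vector falls inside the Wald confidence ellipsoid implied by each simulated estimate and its covariance. This is a generic illustration under made-up values, not the paper's exact procedure.

```python
import numpy as np
from scipy import stats

def joint_coverage(theta_true, estimates, covariances, level=0.95):
    """Fraction of simulated fits whose Wald ellipsoid contains theta_true."""
    d = len(theta_true)
    crit = stats.chi2.ppf(level, df=d)
    hits = 0
    for est, cov in zip(estimates, covariances):
        diff = est - theta_true
        if diff @ np.linalg.solve(cov, diff) <= crit:   # Mahalanobis check
            hits += 1
    return hits / len(estimates)

# Placeholder simulation: draws around the truth with a known covariance.
rng = np.random.default_rng(7)
theta_true = np.array([0.5, -1.0, 2.0])
cov = np.diag([0.04, 0.09, 0.01])
ests = [rng.multivariate_normal(theta_true, cov) for _ in range(2000)]
covs = [cov] * 2000
print(joint_coverage(theta_true, ests, covs))   # should be close to 0.95
```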

15.
Motivated by a biomarker study for colorectal neoplasia, we consider generalized functional linear models where the functional predictors are measured with errors at discrete design points. Assuming that the true functional predictor and the slope function are smooth, we investigate a two-step estimating procedure in which both the true functional predictor and the slope function are estimated through spline smoothing. The operating characteristics of the proposed method are derived, and its usefulness is illustrated by a simulation study as well as an analysis of the data from the motivating colorectal neoplasia study.
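
A caricature of the two-step idea under toy assumptions: presmooth each noisy, discretely observed predictor curve with a spline, then fit a generalized linear model to low-dimensional summaries of the smoothed curves (crude projections onto a small fixed basis stand in here for the paper's spline expansion of the slope function).

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
import statsmodels.api as sm

rng = np.random.default_rng(8)
n, m = 150, 40
t = np.linspace(0, 1, m)
dt = t[1] - t[0]
true_curves = rng.normal(size=(n, 1)) * np.sin(np.pi * t) + rng.normal(size=(n, 1)) * t
W = true_curves + 0.5 * rng.normal(size=(n, m))          # curves observed with error
eta = 2.0 * (true_curves @ np.sin(np.pi * t)) * dt       # true linear predictor
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))          # binary response

# Step 1: presmooth each observed curve with a smoothing spline.
X_smooth = np.array([UnivariateSpline(t, W[i], s=m * 0.25)(t) for i in range(n)])

# Step 2: project the smoothed curves onto a small fixed basis (Riemann sums)
# and fit a logistic GLM to the resulting scores.
basis = np.column_stack([np.sin(np.pi * t), np.cos(np.pi * t), t])
scores = X_smooth @ basis * dt
fit = sm.GLM(y, sm.add_constant(scores), family=sm.families.Binomial()).fit()
print(np.round(fit.params, 2))
```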

16.
Estimation in conditional first order autoregression with discrete support
We consider estimation in the class of first-order conditional linear autoregressive models with discrete support that are routinely used to model time series of counts. Various groups of estimators proposed in the literature are discussed: moment-based estimators, regression-based estimators, and likelihood-based estimators. Some of these have been used previously and others have not. In particular, we address the performance of new types of generalized method of moments estimators and propose an exact maximum likelihood procedure, valid for a Poisson marginal model, using backcasting. The small-sample properties of all estimators are comprehensively analyzed using simulation. Three situations are considered, using data generated with: a fixed autoregressive parameter and equidispersed Poisson innovations; negative binomial innovations; and, additionally, a random autoregressive coefficient. The first set of experiments indicates that bias correction methods, not hitherto used in this context to our knowledge, are sometimes needed and that likelihood-based estimators, as might be expected, perform well. The latter two scenarios are representative of overdispersion. Methods designed specifically for the Poisson context now perform uniformly badly, but simple, bias-corrected, Yule-Walker and least squares estimators perform well in all cases.
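
A small sketch of the Poisson INAR(1) special case of the class above: binomial-thinning simulation followed by two of the simple estimators mentioned (Yule-Walker and conditional least squares). Parameter values are arbitrary, and none of this reproduces the paper's full estimator comparison.

```python
import numpy as np

rng = np.random.default_rng(9)
alpha, lam, T = 0.6, 2.0, 2000             # autoregressive parameter, innovation mean

# Simulate X_t = alpha o X_{t-1} + e_t with binomial thinning and Poisson innovations.
x = np.zeros(T, dtype=int)
x[0] = rng.poisson(lam / (1 - alpha))      # start near the stationary mean
for t in range(1, T):
    x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)

# Yule-Walker: alpha_hat = lag-1 sample autocorrelation, lam_hat = mean * (1 - alpha_hat).
xc = x - x.mean()
alpha_yw = (xc[1:] @ xc[:-1]) / (xc @ xc)
lam_yw = x.mean() * (1 - alpha_yw)

# Conditional least squares: regress x_t on x_{t-1} (slope = alpha, intercept = lam).
A = np.column_stack([np.ones(T - 1), x[:-1]])
(lam_cls, alpha_cls), *_ = np.linalg.lstsq(A, x[1:], rcond=None)

print(round(alpha_yw, 3), round(lam_yw, 3), round(alpha_cls, 3), round(lam_cls, 3))
```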

17.
Two-sample comparison problems are often encountered in practical projects and have been widely studied in the literature. Owing to practical demands, research on this topic under special settings, such as a semiparametric framework, has also attracted great attention. Zhou and Liang (Biometrika 92:271–282, 2005) proposed an empirical likelihood-based semiparametric inference for the comparison of treatment effects in a two-sample problem with censored data. However, their approach is actually a pseudo-empirical likelihood and the method may not be fully efficient. In this study, we develop a new empirical likelihood-based inference under a more general framework, using the hazard formulation of censored data for two-sample semiparametric hybrid models. We demonstrate that our empirical likelihood statistic converges to a standard chi-squared distribution under the null hypothesis. We further illustrate the use of the proposed test by testing the ROC curve with censored data, among other applications. The numerical performance of the proposed method is also examined.

18.
An improved likelihood-based method, based on Fraser et al. (1999), is proposed in this paper to test the significance of the second lag of the stationary AR(2) model. Compared with the test proposed by Fan and Yao (2003) and the signed log-likelihood ratio test, the proposed method has remarkable accuracy. Simulation studies are performed to illustrate the accuracy of the proposed method. An application of the proposed method to historical data is presented to demonstrate its implementation. Furthermore, the method can be extended to the general AR(p) model.
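
For orientation, here is the ordinary signed log-likelihood-ratio test of the second lag, i.e. the benchmark against which the improved method is compared, using statsmodels; the third-order adjustment of Fraser et al. (1999) is not implemented here. The simulated data and settings are illustrative, and the two fits are aligned so that they condition on the same effective sample.

```python
import numpy as np
from scipy import stats
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(10)
T = 300
phi1, phi2 = 0.5, 0.2                     # illustrative AR(2) coefficients
y = np.zeros(T)
for t in range(2, T):
    y[t] = phi1 * y[t - 1] + phi2 * y[t - 2] + rng.normal()

# Fit AR(2) and the restricted AR(1); drop the first observation from the AR(1)
# fit so both conditional likelihoods use the same effective sample y[2:].
fit2 = AutoReg(y, lags=2, trend="c").fit()
fit1 = AutoReg(y[1:], lags=1, trend="c").fit()

lr = 2 * (fit2.llf - fit1.llf)            # likelihood ratio statistic, 1 d.f.
phi2_hat = fit2.params[-1]                # last coefficient = second-lag estimate
r = np.sign(phi2_hat) * np.sqrt(max(lr, 0.0))   # signed log-likelihood root
print(round(r, 3), round(2 * (1 - stats.norm.cdf(abs(r))), 4))   # statistic, p-value
```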

19.
In this paper the periodic integer-valued autoregressive model of order one with period T, driven by a periodic sequence of independent Poisson-distributed random variables, is studied in some detail. Basic probabilistic and statistical properties of this model are discussed. Moreover, parameter estimation is addressed. Specifically, the estimation methods under analysis are method-of-moments, least-squares-type, and likelihood-based ones. Their performance is compared through a simulation study.

20.
Summary.  Data comprising colony counts, or a binary variable representing fertile (or sterile) samples, as a dilution series of the containing medium are analysed by using extended Poisson process modelling. These models form a class of flexible probability distributions that are widely applicable to count and grouped binary data. Standard distributions such as the Poisson and binomial, and those representing overdispersion and underdispersion relative to these distributions, can be expressed within this class. For all the models in the class, likelihoods can be obtained. These models have not been widely used because of the perceived difficulty of performing the calculations and the lack of associated software. Exact calculation of the probabilities involved can be time-consuming, although accurate approximations that use considerably less computational time are available. Although dilution series data are the focus here, the models are applicable to any count or binary data. A benefit of the approach is the ability to draw likelihood-based inferences from the data.
