Similar Documents
20 similar documents found.
1.
We consider the problem of choosing the ridge parameter. Two penalized maximum likelihood (PML) criteria based on a distribution-free and a data-dependent penalty function are proposed. These PML criteria can be considered as “continuous” versions of AIC. A systematic simulation is conducted to compare the suggested criteria to several existing methods. The simulation results strongly support the use of our method. The method is also applied to two real data sets.
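The abstract does not reproduce the PML criteria themselves; as a rough illustration of the general idea, the sketch below scores candidate ridge parameters with a generic AIC-style criterion in which the effective degrees of freedom of the ridge fit play the role of a “continuous” model size. The criterion, function names and grid are illustrative assumptions, not the paper’s method.

```python
import numpy as np

def ridge_aic_path(X, y, lambdas):
    """Generic AIC-style score for candidate ridge parameters.

    Complexity is measured by the effective degrees of freedom
    trace(H(lam)); this is a stand-in, not the paper's PML criterion."""
    n, p = X.shape
    scores = []
    for lam in lambdas:
        A = X.T @ X + lam * np.eye(p)                # ridge normal equations
        beta = np.linalg.solve(A, X.T @ y)
        resid = y - X @ beta
        edf = np.trace(X @ np.linalg.solve(A, X.T))  # trace of the hat matrix
        scores.append(n * np.log(resid @ resid / n) + 2.0 * edf)
    return np.asarray(scores)

# Usage sketch: choose the ridge parameter minimizing the criterion.
# lambdas = np.logspace(-3, 3, 50)
# best_lambda = lambdas[np.argmin(ridge_aic_path(X, y, lambdas))]
```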

2.
In this article, we propose a new empirical information criterion (EIC) for model selection which penalizes the likelihood of the data by a non-linear function of the number of parameters in the model. It is designed to be used where there are a large number of time series to be forecast. However, a bootstrap version of the EIC can be used where there is a single time series to be forecast. The EIC provides a data-driven model selection tool that can be tuned to the particular forecasting task.

We compare the EIC with other model selection criteria including Akaike’s information criterion (AIC) and Schwarz’s Bayesian information criterion (BIC). The comparisons show that for the M3 forecasting competition data, the EIC outperforms both the AIC and BIC, particularly for longer forecast horizons. We also compare the criteria on simulated data and find that the EIC does better than existing criteria in that case also.
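For readers unfamiliar with the baselines, a minimal sketch of the penalized-likelihood scores being compared follows. The "aic" and "bic" penalties are the standard ones; the callable option merely stands in for a data-driven, non-linear penalty in the spirit of the EIC, whose actual estimation from many series is not shown here.

```python
import numpy as np

def information_criterion(loglik, k, n, penalty="aic"):
    """Generic penalized-likelihood score: -2*loglik + penalty.

    'aic' and 'bic' use the standard penalties; a callable penalty is a
    placeholder for an empirically tuned, non-linear function of k."""
    if callable(penalty):
        pen = penalty(k)            # e.g. a data-driven function of model size
    elif penalty == "aic":
        pen = 2.0 * k
    elif penalty == "bic":
        pen = k * np.log(n)
    else:
        raise ValueError("unknown penalty")
    return -2.0 * loglik + pen

# Example: compare two fitted models on the same series of length n = 120.
# ic_small = information_criterion(loglik=-310.2, k=3, n=120, penalty="bic")
# ic_large = information_criterion(loglik=-305.8, k=7, n=120, penalty="bic")
```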

3.
Rhythm Grover & Amit Mitra, Statistics, 2018, 52(5): 1060–1085
Chirp signals are quite common in many natural and man-made systems such as audio signals, sonar, and radar. Estimation of the unknown parameters of a signal is a fundamental problem in statistical signal processing. Recently, Kundu and Nandi [Parameter estimation of chirp signals in presence of stationary noise. Stat Sin. 2008;75:187–201] studied the asymptotic properties of least squares estimators (LSEs) of the unknown parameters of a simple chirp signal model under the assumption of stationary noise. In this paper, we propose periodogram-type estimators called the approximate least squares estimators (ALSEs) to estimate the unknown parameters and study the asymptotic properties of these estimators under the same error assumptions. It is observed that the ALSEs are strongly consistent and asymptotically equivalent to the LSEs. Similar to the periodogram estimators, these estimators can also be used as initial guesses to find the LSEs of the unknown parameters. We perform some numerical simulations to assess the performance of the proposed estimators and compare them with the LSEs and the estimators proposed by Lahiri et al. [Efficient algorithm for estimating the parameters of two dimensional chirp signal. Sankhya B. 2013;75(1):65–89]. We also analyse two real data sets for illustrative purposes.
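As a rough sketch of what a periodogram-type chirp estimator computes, the code below evaluates I(α, β) = |Σ_t y(t) e^{-i(αt + βt²)}|²/n over a grid and takes its maximizer as an initial estimate. The single-component model, grid ranges and resolution are illustrative assumptions and do not reproduce the paper’s ALSE construction in detail.

```python
import numpy as np

def chirp_periodogram(y, alphas, betas):
    """Periodogram-type objective for a single chirp component:
    I(alpha, beta) = |sum_t y[t] * exp(-i(alpha*t + beta*t^2))|^2 / n."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    t = np.arange(1, n + 1)
    I = np.empty((len(alphas), len(betas)))
    for i, a in enumerate(alphas):
        for j, b in enumerate(betas):
            I[i, j] = np.abs(np.sum(y * np.exp(-1j * (a * t + b * t**2))))**2 / n
    return I

# Usage sketch: locate the grid maximizer, then refine or pass it to a
# least-squares step. Search ranges below are problem-specific choices.
# alphas = np.linspace(0.0, np.pi, 200)
# betas = np.linspace(0.0, 0.1, 200)
# I = chirp_periodogram(y, alphas, betas)
# i, j = np.unravel_index(np.argmax(I), I.shape)
# alpha_hat, beta_hat = alphas[i], betas[j]
```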

4.
Order selection is an important step in the application of finite mixture models. Classical methods such as AIC and BIC discourage complex models with a penalty directly proportional to the number of mixing components. In contrast, Chen and Khalili propose to link the penalty to two types of overfitting. In particular, they introduce a regularization penalty to merge similar subpopulations in a mixture model, where the shrinkage idea of regularized regression is seamlessly employed. However, the new method requires an effective and efficient algorithm. When the popular expectation-maximization (EM) algorithm is used, we need to maximize a nonsmooth and nonconcave objective function in the M-step, which is computationally challenging. In this article, we show that such an objective function can be transformed into a sum of univariate auxiliary functions. We then design an iterative thresholding descent algorithm (ITD) to efficiently solve the associated optimization problem. Unlike many existing numerical approaches, the new algorithm leads to sparse solutions and thereby avoids undesirable ad hoc steps. We establish the convergence of the ITD and further assess its empirical performance using both simulations and real data examples.
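The basic building block of thresholding-descent algorithms is the univariate soft-thresholding operator; the toy function below uses it to shrink small gaps between sorted component means so that near-duplicate subpopulations collapse. It only illustrates the shrinkage-and-merge idea and is not the ITD algorithm or the penalty of Chen and Khalili.

```python
import numpy as np

def soft_threshold(z, lam):
    """Univariate soft-thresholding: argmin_x 0.5*(x - z)^2 + lam*|x|."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def merge_close_components(means, lam):
    """Illustrative step only: shrink successive differences of sorted
    component means towards zero so that nearly identical subpopulations
    end up at a common location. Not the ITD algorithm itself."""
    mu = np.sort(np.asarray(means, dtype=float))
    diffs = soft_threshold(np.diff(mu), lam)   # zero out small gaps
    return mu[0] + np.concatenate(([0.0], np.cumsum(diffs)))

# merge_close_components([0.0, 0.05, 1.0, 1.02, 3.0], lam=0.1)
# -> the first two and the middle two components collapse to shared locations.
```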

5.
Model choice is one of the most crucial aspects of any statistical data analysis. It is well known that most models are just an approximation to the true data-generating process, but among such model approximations it is our goal to select the ‘best’ one. Researchers typically consider a finite number of plausible models in statistical applications, and the related statistical inference depends on the chosen model. Hence, model comparison is required to identify the ‘best’ model among several such candidate models. This article considers the problem of model selection for spatial data. The issue of model selection for spatial models has been addressed in the literature by the use of traditional information criteria-based methods, even though such criteria have been developed based on the assumption of independent observations. We evaluate the performance of some of the popular model selection criteria via Monte Carlo simulation experiments using small to moderate samples. In particular, we compare the performance of some of the most popular information criteria, such as the Akaike information criterion (AIC), the Bayesian information criterion (BIC), and the corrected AIC (AICc), in selecting the true model. The ability of these criteria to select the correct model is evaluated under several scenarios. This comparison is made using various spatial covariance models ranging from stationary isotropic to nonstationary models.
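For reference, the three criteria compared in the simulations have the standard forms below, where loglik is the maximized log-likelihood, k the number of parameters and n the number of (here spatially dependent) observations; the question the article raises is precisely whether these independence-based formulas remain reliable for spatial models.

```python
import numpy as np

def aic(loglik, k):
    return -2.0 * loglik + 2.0 * k

def bic(loglik, k, n):
    return -2.0 * loglik + k * np.log(n)

def aicc(loglik, k, n):
    # Small-sample corrected AIC; approaches AIC as n grows
    return aic(loglik, k) + 2.0 * k * (k + 1) / (n - k - 1)

# e.g. a fitted spatial covariance model with k = 4 parameters on n = 100 sites:
# aic(-210.3, 4), bic(-210.3, 4, 100), aicc(-210.3, 4, 100)
```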

6.
Two different forms of Akaike's information criterion (AIC) are compared for selecting the smooth terms in penalized spline additive mixed models. The conditional AIC (cAIC) has been used traditionally as a criterion for both estimating penalty parameters and selecting covariates in smoothing, and is based on the conditional likelihood given the smooth mean and on the effective degrees of freedom for a model fit. By comparison, the marginal AIC (mAIC) is based on the marginal likelihood from the mixed‐model formulation of penalized splines which has recently become popular for estimating smoothing parameters. To the best of the authors' knowledge, the use of mAIC for selecting covariates for smoothing in additive models is new. In the competing models considered for selection, covariates may have a nonlinear effect on the response, with the possibility of group‐specific curves. Simulations are used to compare the performance of cAIC and mAIC in model selection settings that have correlated and hierarchical smooth terms. In moderately large samples, both formulations of AIC perform extremely well at detecting the function that generated the data. The mAIC does better for simple functions, whereas the cAIC is more sensitive to detecting a true model that has complex and hierarchical terms.
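A minimal sketch of the two scores, written in terms of quantities that a penalized-spline / mixed-model fit already provides, is given below. The generic definitions (effective degrees of freedom for cAIC, parameter counts for mAIC) are assumptions about the usual forms and omit implementation details that vary across software.

```python
def conditional_aic(cond_loglik, effective_df):
    """cAIC: conditional log-likelihood given the smooth mean, penalized by
    the effective degrees of freedom of the fit (e.g. trace of the smoother)."""
    return -2.0 * cond_loglik + 2.0 * effective_df

def marginal_aic(marg_loglik, n_fixed, n_variance):
    """mAIC: marginal log-likelihood from the mixed-model representation of
    the penalized splines, penalized by the count of fixed-effect and
    variance parameters."""
    return -2.0 * marg_loglik + 2.0 * (n_fixed + n_variance)

# Both functions take quantities produced by the model fit; the definitions
# here are generic forms, not the exact expressions used in the paper.
```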

7.
A vector-valued estimating function, such as the quasi-score, is typically not the gradient of any objective function. Consequently, an analogue of the likelihood function cannot be unambiguously defined by integrating the estimating function. This paper studies an analogue of likelihood inference in the framework of optimal estimating functions. We propose a quadratic artificial likelihood function for an optimal estimating function. The objective function is uniquely identified as the potential function from the vector field decomposition by imposing some natural restriction on the divergence-free part. The artificial likelihood function is shown to resemble a genuine likelihood function in a number of respects. A bootstrap version of the artificial likelihood function is also studied, which may be used for selecting a root as an estimate from among multiple roots to an estimating equation.

8.
As a natural successor of the information criteria AIC and ABIC, information criteria for Bayes models were developed by evaluating the bias of the log-likelihood of the predictive distribution as an estimate of its expected log-likelihood. Considering two specific situations for the true distribution, two information criteria, PIC1 and PIC2, are derived. Linear Gaussian cases are considered in detail, and the evaluation of the maximum a posteriori estimator is also considered. In a simple example of estimating the signal-to-noise ratio, it is shown that PIC2 is a good approximation to the expected log-likelihood over the entire range of the signal-to-noise ratio. On the other hand, PIC1 performs well only for smaller values of the variance ratio. For illustration, the problems of trend estimation and seasonal adjustment are considered. Examples show that the hyper-parameters estimated by the new criteria are usually closer to the best ones than those obtained by ABIC.

9.
This paper describes inference methods for functional data under the assumption that the functional data of interest are smooth latent functions, characterized by a Gaussian process, which have been observed with noise over a finite set of time points. The methods we propose are completely specified in a Bayesian environment that allows for all inferences to be performed through a simple Gibbs sampler. Our main focus is in estimating and describing uncertainty in the covariance function. However, these models also encompass functional data estimation, functional regression where the predictors are latent functions, and an automatic approach to smoothing parameter selection. Furthermore, these models require minimal assumptions on the data structure as the time points for observations do not need to be equally spaced, the number and placement of observations are allowed to vary among functions, and special treatment is not required when the number of functional observations is less than the dimensionality of those observations. We illustrate the effectiveness of these models in estimating latent functional data, capturing variation in the functional covariance estimate, and in selecting appropriate smoothing parameters in both a simulation study and a regression analysis of medfly fertility data.
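A much-simplified, fixed-hyperparameter version of the underlying computation is sketched below: the conjugate Gaussian-process posterior of a latent smooth function observed with noise at arbitrary (possibly unequally spaced) time points. The squared-exponential kernel and fixed noise variance are illustrative assumptions; the paper instead treats everything, including the covariance function, within a Gibbs sampler.

```python
import numpy as np

def sq_exp_kernel(s, t, length_scale=1.0, variance=1.0):
    """Squared-exponential covariance between time points s and t."""
    d = np.subtract.outer(s, t)
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(t_obs, y_obs, t_new, noise_var=0.1, **kern):
    """Posterior mean/covariance of a latent smooth function observed with
    noise at t_obs — a conjugate, fixed-hyperparameter simplification of the
    fully Bayesian treatment described in the abstract."""
    K = sq_exp_kernel(t_obs, t_obs, **kern) + noise_var * np.eye(len(t_obs))
    K_star = sq_exp_kernel(t_new, t_obs, **kern)
    sol = np.linalg.solve(K, y_obs)
    mean = K_star @ sol
    cov = sq_exp_kernel(t_new, t_new, **kern) - K_star @ np.linalg.solve(K, K_star.T)
    return mean, cov

# t_obs and t_new need not be equally spaced; each observed function can have
# its own t_obs, which is one of the flexibilities the abstract emphasizes.
```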

10.
Accurate estimation of the parameters of superimposed sinusoidal signals is an important problem in digital signal processing and time series analysis. In this article, we propose a procedure for simultaneously estimating the number of signals and the signal parameters. The proposed sequential method is based on a robust bivariate M-periodogram and uses the orthogonal structure of the superimposed sinusoidal model for sequential estimation. Extensive simulations and data analysis show that the proposed method has a high degree of frequency resolution capability and can provide robust and efficient estimates of the number of signals and signal parameters.

11.
12.
In this paper, we consider the problem of estimating the number of components of a superimposed nonlinear sinusoidal model of a signal in the presence of additive noise. We propose and provide a detailed empirical comparison of robust methods for estimating the number of components. The proposed methods, which are robust modifications of the commonly used information-theoretic criteria, are based on various M-estimator approaches and are robust with respect to outliers present in the data and heavy-tailed noise. The proposed methods are compared with the usual non-robust methods through extensive simulations under varied model scenarios. We also present real signal analysis of two speech signals to show the usefulness of the proposed methodology.

13.
We consider the problem of estimating the bearing of a remote object given measurements on a particular type of non-scanning radar, namely a focal-plane array. Such a system focuses incoming radiation through a lens onto an array of detectors. The problem is to estimate the angular position of the radiation source given measurements on the array of detectors and knowledge of the properties of the lens. The training data are essentially noiseless, and an estimator is derived for noisy test conditions. An approach based on kernel basis functions is developed. The estimate of the basis function weights is achieved through a regularization or roughness penalty approach. Choosing the regularization parameter to be proportional to the inverse of the input signal-to-noise ratio leads to a minimum prediction error. Experimental results for a 12-element detector array support the theoretical predictions.
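A hedged sketch of the regularized kernel-basis fit is given below, with the regularization parameter set proportional to the inverse signal-to-noise ratio as the abstract recommends. The Gaussian kernel, its width and the proportionality constant c are illustrative assumptions, not the paper’s choices.

```python
import numpy as np

def gaussian_kernel(A, B, width=1.0):
    """Gaussian kernel between rows of A and rows of B."""
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / width**2)

def fit_bearing_estimator(X_train, y_train, snr, c=1.0, width=1.0):
    """Regularized kernel-basis fit with the regularization parameter taken
    proportional to 1/SNR; kernel form, width and c are illustrative."""
    lam = c / snr
    K = gaussian_kernel(X_train, X_train, width)
    weights = np.linalg.solve(K + lam * np.eye(len(X_train)), y_train)

    def predict(X_new):
        return gaussian_kernel(X_new, X_train, width) @ weights

    return predict

# X_train: (n_train x 12) noiseless detector-array responses for a 12-element array
# y_train: corresponding bearings; predict(X_test) estimates bearings under noise.
```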

14.
This paper proposes an adaptive model selection criterion with a data-driven penalty term. We treat model selection as an equality constrained minimization problem and develop an adaptive model selection procedure based on the Lagrange optimization method. In contrast to Akaike's information criterion (AIC), the Bayesian information criterion (BIC) and most other existing criteria, this new criterion minimizes the model size while taking a measure of lack of fit as an adaptive penalty. Both theoretical results and simulations illustrate the power of this criterion with respect to consistency and pointwise asymptotic loss efficiency in the parametric and nonparametric cases.

15.
We use semi-parametric efficiency theory to derive a class of estimators for the state occupation probabilities of the continuous-time irreversible illness-death model. We consider both the setting with and without additional baseline information available, where we impose no specific functional form on the intensity functions of the model. We show that any estimator in the class is asymptotically linear under suitable assumptions about the estimators of the intensity functions. In particular, the assumptions are weak enough to allow the use of data-adaptive methods, which is important for making the identifying assumption of coarsening at random plausible in realistic settings. We suggest a flexible method for estimating the transition intensity functions of the illness-death model based on penalized Poisson regression. We apply this method to estimate the nuisance parameters of an illness-death model in a simulation study and a real-world application.

16.
Most parametric statistical methods are based on a set of assumptions: normality, linearity and homoscedasticity. Transformation of a metric response is a popular method to meet these assumptions. In particular, transformation of the response of a linear model is a popular method when attempting to satisfy the Gaussian assumptions on the error components in the model. A particular problem with common transformations such as the logarithm or the Box–Cox family is that negative and zero data values cannot be transformed. This paper proposes a new transformation which allows negative and zero data values. The method for estimating the transformation parameter considers an objective criterion based on kurtosis and skewness for achieving normality. Use of the new transformation and the method for estimating the transformation parameter are illustrated with three data sets.
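The paper's transformation is not reproduced in the abstract, so the sketch below substitutes the inverse-hyperbolic-sine transform, which is likewise defined for zero and negative values, and picks its parameter by a kurtosis-and-skewness objective of the kind the abstract describes. The transform, grid and objective are all stand-in assumptions.

```python
import numpy as np

def asinh_transform(x, theta):
    """Inverse-hyperbolic-sine transform: defined for zero and negative x.
    Used only as a stand-in for the paper's (different) transformation."""
    return np.arcsinh(theta * np.asarray(x, dtype=float)) / theta

def normality_score(z):
    """Objective in the spirit of the abstract: distance of sample skewness
    and kurtosis from their Gaussian values (0 and 3)."""
    z = (z - z.mean()) / z.std()
    return abs(np.mean(z**3)) + abs(np.mean(z**4) - 3.0)

def choose_theta(x, grid=None):
    """Grid search for the transformation parameter minimizing the score."""
    if grid is None:
        grid = np.linspace(0.01, 5.0, 200)
    scores = [normality_score(asinh_transform(x, th)) for th in grid]
    return grid[int(np.argmin(scores))]
```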

17.
The Cox proportional hazards model is widely used in clinical trials with time-to-event outcomes to compare an experimental treatment with the standard of care. At the design stage of a trial the number of events required to achieve a desired power needs to be determined, which is frequently based on estimating the variance of the maximum partial likelihood estimate of the regression parameter with a function of the number of events. Underestimating the variance at the design stage will lead to insufficiently powered studies, and overestimating the variance will lead to unnecessarily large trials. A simple approach to estimating the variance is introduced, which is compared with two widely adopted approaches in practice. Simulation results show that the proposed approach outperforms the standard ones and gives nearly unbiased estimates of the variance.
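For context, one of the conventional design-stage calculations the abstract alludes to is the Schoenfeld-type event count, which approximates var(β̂) by 1/(d·p·(1−p)) with d events and allocation proportion p. The sketch below implements that standard baseline, not the paper's proposed variance estimator.

```python
from scipy.stats import norm

def required_events(log_hr, alpha=0.05, power=0.8, alloc=0.5):
    """Schoenfeld-style event count for a two-arm Cox comparison, based on
    the approximation var(beta_hat) ~ 1 / (d * p * (1 - p))."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return (z_a + z_b) ** 2 / (alloc * (1 - alloc) * log_hr ** 2)

# Example: hazard ratio 0.7, 1:1 allocation, 80% power at two-sided 5%:
# required_events(log_hr=-0.3567)   # about 247 events
```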

18.
This paper deals with the problem of estimating the mean parameter of a truncated normal distribution with known coefficient of variation. In previous treatments of this problem most authors have used the sample standard deviation for estimating this parameter. In the present paper we use Gini’s coefficient of mean difference g and obtain the minimum variance unbiased estimate of the mean based on a linear function of the sample mean and g. It is shown that this new estimate has desirable properties for small samples as well as for large samples. We also give a numerical example.
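A small sketch of Gini's coefficient of mean difference is given below; the paper's estimator combines it linearly with the sample mean using coefficients derived from the known coefficient of variation, which are not reproduced here.

```python
import numpy as np

def gini_mean_difference(x):
    """Gini's coefficient of mean difference:
    g = 2 / (n*(n-1)) * sum_{i<j} |x_i - x_j|."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    diffs = np.abs(x[:, None] - x[None, :])
    return diffs.sum() / (n * (n - 1))   # off-diagonal sum counts each pair twice

# The paper's estimator is a linear combination a*mean(x) + b*g with a and b
# chosen (via the known coefficient of variation) to give an unbiased,
# minimum-variance estimate; those coefficients are not reproduced here.
```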

19.
We introduce a flexible marginal modelling approach for statistical inference for clustered and longitudinal data under minimal assumptions. This estimated estimating equations approach is semiparametric and the proposed models are fitted by quasi-likelihood regression, where the unknown marginal means are a function of the fixed effects linear predictor with unknown smooth link, and variance–covariance is an unknown smooth function of the marginal means. We propose to estimate the nonparametric link and variance–covariance functions via smoothing methods, whereas the regression parameters are obtained via the estimated estimating equations. These are score equations that contain nonparametric function estimates. The proposed estimated estimating equations approach is motivated by its flexibility and easy implementation. Moreover, if data follow a generalized linear mixed model, with either a specified or an unspecified distribution of random effects and link function, the model proposed emerges as the corresponding marginal (population-average) version and can be used to obtain inference for the fixed effects in the underlying generalized linear mixed model, without the need to specify any other components of this generalized linear mixed model. Among marginal models, the estimated estimating equations approach provides a flexible alternative to modelling with generalized estimating equations. Applications of estimated estimating equations include diagnostics and link selection. The asymptotic distribution of the proposed estimators for the model parameters is derived, enabling statistical inference. Practical illustrations include Poisson modelling of repeated epileptic seizure counts and simulations for clustered binomial responses.

20.
We present theoretical results on the covariance structure of random wavelet coefficients. We use simple properties of the coefficients to derive a recursive way to compute the within- and across-scale covariances. We point out a useful link between the algorithm proposed and the two-dimensional discrete wavelet transform. We then focus on Bayesian wavelet shrinkage for estimating a function from noisy data. A prior distribution is imposed on the coefficients of the unknown function. We show how our findings on the covariance structure make it possible to specify priors that take into account the full correlation between coefficients through a parsimonious number of hyperparameters. We use Markov chain Monte Carlo methods to estimate the parameters and illustrate our method on benchmark simulated signals.
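For contrast with the Bayesian, correlation-aware shrinkage developed in the paper, the sketch below applies the classical universal-threshold (VisuShrink) soft shrinkage using PyWavelets; the wavelet choice, decomposition level and MAD noise estimate are conventional defaults, not the paper's prior-based rule.

```python
import numpy as np
import pywt

def wavelet_shrink(y, wavelet="db4", level=4):
    """Universal-threshold soft shrinkage — a simple non-Bayesian baseline
    for the correlation-aware Bayesian shrinkage described in the abstract."""
    coeffs = pywt.wavedec(y, wavelet, level=level)
    # Noise scale from the finest-detail coefficients via the MAD estimator
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(y)))
    shrunk = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(shrunk, wavelet)[: len(y)]
```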
