Similar Documents
20 similar documents found.
1.
A new procedure is proposed to estimate the jump location curve and surface in two-dimensional (2D) and three-dimensional (3D) nonparametric jump regression models, respectively. In each of the 2D and 3D cases, our estimation procedure is motivated by the fact that, under some regularity conditions, the ridge location of the rotational difference kernel estimate (RDKE; Qiu in Sankhyā Ser. A 59:268–294, 1997, and J. Comput. Graph. Stat. 11:799–822, 2002; Garlipp and Müller in Sankhyā Ser. A 69:55–86, 2007) obtained from the noisy image is asymptotically close to the jump location of the true image. Accordingly, a computational procedure based on kernel smoothing is designed to find the ridge location of the RDKE, and the result is taken as the jump location estimate. The ordering of the points comprising our jump location estimate is obtained as well. Our estimate is produced without knowledge of the range or shape of the jump region. Simulation results demonstrate that the proposed procedure detects the jump location very well, and it is thus a useful alternative for estimating the jump location in both the 2D and 3D cases.
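As rough intuition for the difference-kernel idea, the sketch below detects a single jump in a 1D signal by comparing one-sided local averages and locating the peak of their difference; it is a hypothetical simplification, not the RDKE ridge-finding procedure for 2D/3D images.

```python
import numpy as np

# Minimal 1D analogue of the difference-kernel idea (illustrative only):
# at each point, compare one-sided local averages; the jump is located
# where their absolute difference peaks.
rng = np.random.default_rng(0)
n, h = 500, 0.05                                  # sample size, bandwidth
x = np.linspace(0, 1, n)
y = np.where(x > 0.6, 2.0, 0.0) + rng.normal(0, 0.3, n)  # true jump at 0.6

def one_sided_diff(x0):
    left = y[(x0 - h <= x) & (x < x0)]
    right = y[(x0 < x) & (x <= x0 + h)]
    if len(left) == 0 or len(right) == 0:
        return 0.0
    return abs(right.mean() - left.mean())

grid = x[(x > h) & (x < 1 - h)]
diffs = np.array([one_sided_diff(x0) for x0 in grid])
print("estimated jump location:", grid[np.argmax(diffs)])  # close to 0.6
```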

2.
Model selection is a general paradigm which includes many statistical problems. One of the most fruitful and popular approaches to carrying it out is the minimization of a penalized criterion. Birgé and Massart (Probab. Theory Relat. Fields 138:33–73, 2006) have proposed a promising data-driven method to calibrate such criteria whose penalties are known up to a multiplicative factor: the "slope heuristics". Theoretical works validate this heuristic method in some situations, and several papers report promising practical behavior in various frameworks. The purpose of this work is twofold. First, an introduction to the slope heuristics and an overview of the theoretical and practical results concerning it are presented. Second, we focus on the practical difficulties that occur when applying the slope heuristics. A new practical approach is carried out and compared to the standard dimension jump method. All the practical solutions discussed in this paper in different frameworks are implemented and brought together in a Matlab graphical user interface called capushe. Supplemental materials containing further information and an additional application, the capushe package, and the datasets presented in this paper are available on the journal web site.
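The core of the heuristic can be sketched in a few lines: for overly large models the empirical contrast decreases roughly linearly in the model dimension, the slope is estimated on those models, and the final penalty is twice the estimated slope. The contrast values below are simulated toy numbers, not output of the capushe package.

```python
import numpy as np

# Schematic slope-heuristics sketch: gamma(m) ~ c - kappa * D_m for large
# models; estimate kappa there, then use pen(m) = 2 * kappa_hat * D_m.
rng = np.random.default_rng(1)
D = np.arange(1, 51)                                       # model dimensions
gamma = 5.0 / D - 0.02 * D + rng.normal(0, 0.01, D.size)   # toy contrasts

big = D >= 25                                  # "large model" region
kappa_hat = -np.polyfit(D[big], gamma[big], 1)[0]   # minus fitted slope
crit = gamma + 2 * kappa_hat * D               # penalized criterion
print("kappa_hat:", kappa_hat)
print("selected dimension:", D[np.argmin(crit)])
```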

3.
This paper considers the problem of hypothesis testing in a simple panel data regression model with random individual effects and serially correlated disturbances. Following Baltagi et al. (Econom. J. 11:554–572, 2008), we allow for the possibility of non-stationarity in the regressor and/or the disturbance term. While Baltagi et al. (Econom. J. 11:554–572, 2008) focus on the asymptotic properties and distributions of the standard panel data estimators, this paper focuses on testing hypotheses in this setting. One important finding is that, unlike in the time-series case, one does not necessarily need to rely on the "super-efficient" type AR estimator of Perron and Yabu (J. Econom. 151:56–69, 2009) to conduct inference in the panel data setting. In fact, we show that the simple t-ratio always converges to the standard normal distribution, regardless of whether the disturbances and/or the regressor are stationary.
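A Monte Carlo sketch of the kind of check behind such a claim, in a simplified setting (zero true slope, exogenous regressor, random effects plus stationary AR(1) disturbances; all design values illustrative), examines the empirical distribution of the pooled-OLS t-ratio:

```python
import numpy as np

# Simulate a random-effects panel with AR(1) errors and inspect the
# distribution of the pooled-OLS t-ratio under the null of zero slope.
rng = np.random.default_rng(2)
N, T, rho, reps = 50, 20, 0.5, 2000
tstats = []
for _ in range(reps):
    mu = rng.normal(0, 1, (N, 1))              # individual effects
    e = rng.normal(0, 1, (N, T))
    for t in range(1, T):                      # AR(1) disturbances
        e[:, t] = rho * e[:, t - 1] + e[:, t]
    x = rng.normal(0, 1, (N, T))               # exogenous regressor
    y = (mu + e).ravel()                       # true slope is 0
    xf = x.ravel()
    b = xf @ y / (xf @ xf)
    resid = y - b * xf
    se = np.sqrt(resid.var(ddof=1) / (xf @ xf))
    tstats.append(b / se)
print("mean, sd of t-ratios:", np.mean(tstats), np.std(tstats))  # ~0, ~1
```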

4.
In an earlier contribution to this journal, Kauermann and Weihs (Adv. Stat. Anal. 91(4):344, 2007) addressed the lack of procedural understanding in statistical consulting: "Even though there seems to be a consensus that statistical consulting should be well structured and target-orientated, the range of activity and the process itself seem to be less well-understood." While this issue appears to be rather new to statistical consultants, other consulting disciplines (in particular management consultants) have long since developed a viable approach that divides the typical consulting process into seven successive steps. Using this model as a frame allows us to reflect on the approaches to statistical consulting suggested by the authors published in AStA volume 91, number 4, and to add value to statistical consulting in general.

5.
The subject of the present study is to analyze how accurately the elaborate price jump detection methodology of Barndorff-Nielsen and Shephard (J. Financ. Econom. 2:1–37, 2004a; 4:1–30, 2006) applies to financial time series characterized by less frequent trading. In this context, it is of primary interest to understand the impact of infrequent trading on the two test statistics applicable to disentangling the contribution of price jumps to realized variance. In a simulation study, evidence is found that infrequent trading induces a sizable distortion of the test statistics towards over-rejection. A new empirical investigation using high-frequency information on the most heavily traded electricity forward contract of the Nord Pool Energy Exchange corroborates the evidence of the simulation. In line with the theory, a "zero-return-adjusted estimation" is introduced to reduce the bias in the test statistics, as illustrated in both the simulation study and the empirical case.
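A sketch of one textbook form of the test, the adjusted ratio statistic based on realized variance, bipower variation, and tripower quarticity (constants as in the 2006 paper), is given below; it is a schematic illustration, not the authors' exact implementation.

```python
import numpy as np
from scipy.special import gamma as G

# Barndorff-Nielsen/Shephard-type ratio jump test on intraday log-returns r.
def bns_ratio_test(r):
    n = len(r)
    rv = np.sum(r**2)                                    # realized variance
    mu1 = np.sqrt(2 / np.pi)
    bv = mu1**-2 * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))    # bipower var.
    mu43 = 2**(2 / 3) * G(7 / 6) / G(1 / 2)
    tq = n * mu43**-3 * np.sum(                          # tripower quarticity
        (np.abs(r[2:]) * np.abs(r[1:-1]) * np.abs(r[:-2]))**(4 / 3))
    theta = (np.pi / 2)**2 + np.pi - 5
    z = (1 - bv / rv) / np.sqrt(theta * max(1, tq / bv**2) / n)
    return z                                             # ~N(0,1) if no jumps

rng = np.random.default_rng(3)
r = rng.normal(0, 0.01, 390)                             # no-jump returns
r_jump = r.copy(); r_jump[200] += 0.05                   # add one jump
print(bns_ratio_test(r), bns_ratio_test(r_jump))
```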

6.
Quantile regression, including median regression, as a more complete statistical model than mean regression, is now well known for its widespread applications. Bayesian inference on quantile regression, or Bayesian quantile regression, has attracted much interest recently. Most existing research in Bayesian quantile regression focuses on parametric quantile regression, though there are discussions of different ways of modeling the model error, either by a parametric distribution named the asymmetric Laplace distribution or by a nonparametric alternative named the scale mixture of asymmetric Laplace distributions. This paper discusses Bayesian inference for nonparametric quantile regression. This general approach fits quantile regression curves using piecewise polynomial functions with an unknown number of knots at unknown locations, all treated as parameters to be inferred through reversible jump Markov chain Monte Carlo (RJMCMC) of Green (Biometrika 82:711–732, 1995). Instead of drawing samples from the posterior, we use regression quantiles to create Markov chains for the estimation of the quantile curves. We also use approximate Bayes factors in the inference. This method extends work on automatic Bayesian mean curve fitting to quantile regression. Numerical results show that this Bayesian quantile smoothing technique is competitive with the quantile regression/smoothing splines of He and Ng (Comput. Stat. 14:315–337, 1999) and the P-splines (penalized splines) of Eilers and de Menezes (Bioinformatics 21(7):1146–1153, 2005).
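As a non-Bayesian miniature of the underlying curve-fitting step, the sketch below fits a piecewise-linear 0.9-quantile curve with fixed knots by minimizing the check (pinball) loss; the RJMCMC moves over the number and location of knots are omitted, and the knot grid is an illustrative assumption.

```python
import numpy as np
from scipy.optimize import minimize

# Fit a fixed-knot linear-spline quantile curve by minimizing check loss.
rng = np.random.default_rng(4)
x = np.sort(rng.uniform(0, 1, 300))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 300)
tau, knots = 0.9, np.linspace(0.1, 0.9, 5)

def basis(t):          # linear spline basis: 1, t, (t - k)_+
    return np.column_stack([np.ones_like(t), t] +
                           [np.clip(t - k, 0, None) for k in knots])

def check_loss(beta):  # rho_tau(u) = u * (tau - 1{u < 0})
    u = y - basis(x) @ beta
    return np.sum(u * (tau - (u < 0)))

fit = minimize(check_loss, np.zeros(2 + len(knots)), method="Powell")
print("fitted 0.9-quantile at x=0.25:", basis(np.array([0.25])) @ fit.x)
```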

7.
We propose a more efficient version of the slice sampler for Dirichlet process mixture models described by Walker (Commun. Stat., Simul. Comput. 36:45–54, 2007). This new sampler allows for the fitting of infinite mixture models with a wide range of prior specifications. To illustrate this flexibility we consider priors defined through infinite sequences of independent positive random variables. Two applications are considered: density estimation using mixture models and hazard function estimation. In each case we show how the slice-efficient sampler can be applied to make inference in the models. In the mixture case, two submodels are studied in detail. The first assumes that the positive random variables are Gamma distributed and the second assumes that they are inverse-Gaussian distributed. Both priors have two hyperparameters, and we consider their effect on the prior distribution of the number of occupied clusters in a sample. Extensive computational comparisons are made with alternative "conditional" simulation techniques for mixture models using the standard Dirichlet process prior and our new priors. The properties of the new priors are illustrated on a density estimation problem.
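The key device can be shown in isolation: introducing a slice variable u_i ~ U(0, w_{z_i}) means that, conditionally on u_i, only the finitely many mixture components with w_j > u_i can receive observation i, so the infinite mixture is handled exactly. The sketch below illustrates just this truncation step under the standard stick-breaking prior; the full Gibbs updates of weights and atoms are omitted.

```python
import numpy as np

# Slice-truncation device for an infinite (stick-breaking) mixture.
rng = np.random.default_rng(5)

def stick_breaking(alpha, eps):
    # generate weights until the remaining stick mass drops below eps
    w, rem = [], 1.0
    while rem > eps:
        v = rng.beta(1, alpha)
        w.append(rem * v)
        rem *= 1 - v
    return np.array(w)

w = stick_breaking(alpha=2.0, eps=1e-6)
z = rng.choice(len(w), p=w / w.sum())     # current allocation of one obs.
u = rng.uniform(0, w[z])                  # slice variable
active = np.where(w > u)[0]               # finite candidate set
print(len(w), "weights generated;", len(active), "active components")
```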

8.
This paper proposes a new probabilistic classification algorithm using a Markov random field approach. The joint distribution of class labels is explicitly modelled using the distances between feature vectors. Intuitively, a class label should depend more strongly on class labels that are closer in the feature space than on those that are further away. Our approach builds on previous work by Holmes and Adams (J. R. Stat. Soc. Ser. B 64:295–306, 2002; Biometrika 90:99–112, 2003) and Cucala et al. (J. Am. Stat. Assoc. 104:263–273, 2009). Our work shares many of the advantages of these approaches in providing a probabilistic basis for the statistical inference. In comparison to previous work, we present a more efficient computational algorithm to overcome the intractability of the Markov random field model. The results of our algorithm are encouraging in comparison to the k-nearest-neighbour algorithm.
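A simplified distance-weighted probabilistic classifier in the same spirit (not the authors' MRF algorithm) makes the intuition concrete: the predictive probability of each class decays with distance in feature space.

```python
import numpy as np

# Toy distance-weighted probabilistic classifier on two Gaussian classes.
rng = np.random.default_rng(6)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.repeat([0, 1], 50)

def class_probs(x_new, h=1.0):
    d = np.linalg.norm(X - x_new, axis=1)
    w = np.exp(-d / h)                        # closer points weigh more
    p = np.array([w[y == c].sum() for c in (0, 1)])
    return p / p.sum()

print(class_probs(np.array([0.2, 0.1])))      # mostly class 0
print(class_probs(np.array([2.8, 3.2])))      # mostly class 1
```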

9.
The purposes of this research are: (1) to obtain spline function estimates in nonparametric regression for longitudinal data, with and without accounting for the autocorrelation among within-subject observations; (2) to develop an algorithm that generates simulated data with a given autocorrelation level for each sample size (N) and error variance (EV); and (3) to establish the shape of the spline estimator in nonparametric regression for longitudinal data under various levels of autocorrelation, comparing the DM approach (which accounts for within-subject autocorrelation) and the TM approach (which does not). The results of the application are as follows: (a) smoothing splines fitted by penalized weighted least squares (PWLS), with or without consideration of autocorrelation, give significantly different spline estimates in general (for all sample sizes and all error variance levels) when the autocorrelation level exceeds 0.8; (b) compared by sample size, the DM and TM spline estimates differ significantly when the autocorrelation exceeds 0.8 (overall and for moderate and large sample sizes) and 0.7 (for small sample sizes); (c) compared by error variance level, the DM and TM spline estimates differ significantly when the autocorrelation exceeds 0.8 (overall and for moderate and large variances) and 0.7 (for small variances).
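A minimal PWLS sketch for a single subject (illustrative values; AR(1) errors, pointwise fit with a second-difference penalty, weights equal to the inverse AR(1) correlation matrix) shows how within-subject autocorrelation enters the fit:

```python
import numpy as np

# Penalized weighted least squares smoothing with AR(1) weights:
# minimize (y - f)' W (y - f) + lam * ||D f||^2  =>  (W + lam D'D) f = W y.
rng = np.random.default_rng(7)
T, rho, lam = 40, 0.8, 5.0
t = np.linspace(0, 1, T)
e = np.zeros(T)
for i in range(1, T):                         # AR(1) within-subject errors
    e[i] = rho * e[i - 1] + rng.normal(0, 0.3)
y = np.sin(2 * np.pi * t) + e

R = rho ** np.abs(np.subtract.outer(np.arange(T), np.arange(T)))
W = np.linalg.inv(R)                          # inverse AR(1) correlation
D = np.diff(np.eye(T), 2, axis=0)             # second-difference penalty
f_hat = np.linalg.solve(W + lam * D.T @ D, W @ y)
print("fitted values at first 5 time points:", f_hat[:5])
```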

10.
In nonparametric regression the smoothing parameter can be selected by minimizing a Mean Squared Error (MSE) based criterion. For spline smoothing one can also rewrite the smooth estimation as a linear mixed model, where the smoothing parameter appears as the a priori variance of the spline basis coefficients. This allows one to employ Maximum Likelihood (ML) theory to estimate the smoothing parameter as a variance component. In this paper the relation between the two approaches is illuminated for penalized spline smoothing (P-splines) as suggested by Eilers and Marx (Stat. Sci. 11(2):89–121, 1996). Theoretical and empirical arguments are given showing that the ML approach is biased towards undersmoothing, i.e. it chooses too complex a model compared to the MSE criterion. The result is in line with classical spline smoothing, even though the asymptotic arguments are different. This is because in P-spline smoothing a finite-dimensional basis is employed, while in classical spline smoothing the basis grows with the sample size.
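A sketch of the MSE side of this comparison, generalized cross-validation over a grid of smoothing parameters, on an assumed truncated-line basis with a ridge penalty on the spline coefficients (all settings illustrative):

```python
import numpy as np

# P-spline-style fit with GCV-selected smoothing parameter.
rng = np.random.default_rng(8)
n = 200
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, n)

knots = np.linspace(0.05, 0.95, 20)
B = np.column_stack([np.ones(n), x] +
                    [np.clip(x - k, 0, None) for k in knots])
P = np.diag([0, 0] + [1] * len(knots))        # penalize spline coefficients

def gcv(lam):
    S = B @ np.linalg.solve(B.T @ B + lam * P, B.T)   # smoother matrix
    resid = y - S @ y
    return n * resid @ resid / (n - np.trace(S))**2

lams = 10.0 ** np.linspace(-4, 4, 60)
lam_gcv = lams[np.argmin([gcv(l) for l in lams])]
print("GCV-chosen lambda:", lam_gcv)
```

In the mixed-model formulation the same lambda corresponds to the variance ratio sigma_eps^2 / sigma_b^2, which is what ML/REML estimates instead.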

11.
In order to guarantee confidentiality and privacy of firm-level data, statistical offices apply various disclosure limitation techniques. However, each anonymization technique has its protection limits, so that the probability of disclosing individual information is not minimized for every observation. To overcome this problem, we propose combining two separate disclosure limitation techniques, blanking and multiplication by independent noise, in order to protect the original dataset. The proposed approach yields a decrease in the probability of re-identifying/disclosing individual information and can be applied to linear and nonlinear regression models. We show how to combine the blanking method with the multiplicative measurement error method, and how to estimate the model by combining the multiplicative Simulation-Extrapolation (M-SIMEX) approach of Nolte (2007) with, on the one side, the Inverse Probability Weighting (IPW) approach going back to Horvitz and Thompson (J. Am. Stat. Assoc. 47:663–685, 1952) and, on the other side, matching methods as an alternative to IPW, such as the semiparametric M-estimator proposed by Flossmann (2007). Based on Monte Carlo simulations, we show that multiplicative measurement error combined with blanking as a masking procedure does not necessarily lead to a severe reduction in estimation quality, provided that its effects on the data-generating process are known.
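The sketch below combines the two masking steps with an IPW-type correction under strong simplifying assumptions (known blanking probabilities, mean-one multiplicative noise on the response only); it is not the M-SIMEX procedure itself.

```python
import numpy as np

# Mask a response by multiplicative noise and blanking, then re-estimate
# the regression with inverse-probability weights for the blanking step.
rng = np.random.default_rng(9)
n = 2000
x = rng.normal(0, 1, n)
y = 1.0 + 2.0 * x + rng.normal(0, 1, n)

y_noisy = y * rng.lognormal(-0.005, 0.1, n)   # mean-one multiplicative noise
p_blank = np.where(np.abs(y) > 2, 0.5, 0.1)   # blanking design (known here)
keep = rng.uniform(size=n) > p_blank

w = 1.0 / (1 - p_blank[keep])                 # inverse selection probability
X = np.column_stack([np.ones(keep.sum()), x[keep]])
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y_noisy[keep]))
print("IPW estimate (intercept, slope):", beta)   # close to (1, 2)
```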

12.
In this paper we discuss new adaptive proposal strategies for sequential Monte Carlo algorithms, also known as particle filters, relying on criteria evaluating the quality of the proposed particles. The choice of the proposal distribution is a major concern and can dramatically influence the quality of the estimates. Thus, we show how the long-used coefficient of variation of the weights (suggested by Kong et al. in J. Am. Stat. Assoc. 89:278–288, 1994) can be used for estimating the chi-square distance between the target and instrumental distributions of the auxiliary particle filter. As a by-product of this analysis we obtain an auxiliary adjustment multiplier weight type for which this chi-square distance is minimal. Moreover, we establish an empirical estimate, of linear complexity, of the Kullback-Leibler divergence between the involved distributions. Guided by these results, we discuss adaptive design of the particle filter proposal distribution and illustrate the methods on a numerical example. This work was partly supported by the National Research Agency (ANR) under the program "ANR-05-BLAN-0299".
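The weight diagnostic is easy to state concretely: with normalized importance weights, the squared coefficient of variation estimates the chi-square distance between target and proposal, and yields the familiar effective sample size N/(1 + CV²). A self-contained sketch:

```python
import numpy as np

# Importance weights for target N(0,1) under proposal N(0,2^2), and the
# CV^2 / chi-square-distance / ESS relationship.
rng = np.random.default_rng(10)
N = 10_000
x = rng.normal(0, 2, N)                        # draws from the proposal
logw = (-0.5 * x**2) - (-0.5 * (x / 2)**2 - np.log(2))  # log target - log prop.
w = np.exp(logw - logw.max())
w /= w.sum()                                   # normalized weights, mean 1/N

cv2 = N * np.sum((w - 1 / N)**2)               # squared coeff. of variation
ess = N / (1 + cv2)                            # equals 1 / sum(w^2)
print("chi-square distance estimate:", cv2, "ESS:", ess)
```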

13.
Time series arising in practice often have an inherently irregular sampling structure or missing values, which can arise, for example, from a faulty measuring device or from the complex time-dependent nature of the phenomenon. Spectral decomposition of time series is a traditionally useful tool for analyzing data variability. However, existing methods for spectral estimation often assume a regularly sampled time series, or require modifications to cope with irregular or 'gappy' data. Additionally, many techniques assume that the time series is stationary, which in the majority of cases is demonstrably not appropriate. This article addresses the topic of spectral estimation of a non-stationary time series sampled with missing data. The time series is modelled as a locally stationary wavelet process in the sense introduced by Nason et al. (J. R. Stat. Soc. B 62(2):271–292, 2000), and its realization is assumed to feature missing observations. Our work proposes an estimator (the periodogram) for the process wavelet spectrum which copes with the missing data whilst relaxing the strong assumption of stationarity. At the centre of our construction are second-generation wavelets built by means of the lifting scheme (Sweldens, Wavelet Applications in Signal and Image Processing III, Proc. SPIE, vol. 2569, pp. 68–79, 1995), designed to cope with irregular data. We investigate the theoretical properties of our proposed periodogram and show that it can be smoothed to produce a bias-corrected spectral estimate by adopting a penalized least squares criterion. We demonstrate our method with real data and simulated examples.
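The lifting scheme underlying the construction is simple to illustrate: one decomposition level consists of a split into even and odd samples, a predict step, and an update step. Below is a Haar-like sketch on regularly spaced data; second-generation variants adapt the predict weights to irregular sampling.

```python
import numpy as np

# One level of a Haar-like lifting transform with perfect reconstruction.
def lifting_forward(x):
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even            # predict: odd values from even ones
    smooth = even + detail / 2     # update: preserve the running mean
    return smooth, detail

def lifting_inverse(smooth, detail):
    even = smooth - detail / 2
    odd = even + detail
    x = np.empty(2 * len(smooth))
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([4.0, 6.0, 5.0, 9.0, 2.0, 2.0, 8.0, 4.0])
s, d = lifting_forward(x)
print(np.allclose(lifting_inverse(s, d), x))   # True: perfect reconstruction
```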

14.
In this paper, a variance decomposition approach is developed to quantify the effects of endogenous and exogenous variables in nonlinear time series models. The decomposition is taken temporally with respect to the source of variation. The methodology uses Monte Carlo methods to effect the variance decomposition, using the ANOVA-like procedures proposed in Archer et al. (J. Stat. Comput. Simul. 58:99–120, 1997) and Sobol' (Math. Model. 2:112–118, 1990). The results of this paper can be used in investment problems, biomathematics and control theory, where nonlinear time series with multiple inputs are encountered.
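A sketch of the Sobol'-type Monte Carlo computation: pick-freeze estimation of first-order indices on the classic Ishigami test function, used here as an illustrative stand-in for a nonlinear multi-input model.

```python
import numpy as np

# Pick-freeze estimate of first-order Sobol' indices S_i = Var(E[Y|X_i])/Var(Y).
rng = np.random.default_rng(11)

def g(X):  # Ishigami function
    return (np.sin(X[:, 0]) + 7 * np.sin(X[:, 1])**2
            + 0.1 * X[:, 2]**4 * np.sin(X[:, 0]))

N, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (N, d))
B = rng.uniform(-np.pi, np.pi, (N, d))
yA, yB = g(A), g(B)
var_y = yA.var()

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                        # replace only column i
    Si = np.mean(yB * (g(ABi) - yA)) / var_y   # Saltelli-style estimator
    print(f"S{i+1} ~= {Si:.3f}")               # ~0.31, ~0.44, ~0.00
```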

15.
This paper considers the analysis of multivariate survival data where the marginal distributions are specified by semiparametric transformation models, a general class including the Cox model and the proportional odds model as special cases. First, consideration is given to the situation where the joint distribution of all failure times within the same cluster is specified by the Clayton–Oakes model (Clayton, Biometrika 65:141–151, 1978; Oakes, J. R. Stat. Soc. B 44:412–422, 1982). A two-stage estimation procedure is adopted: the marginal parameters are first estimated under the independence working assumption, and the association parameter is then estimated by maximizing the full likelihood function with the estimators of the marginal parameters plugged in. The asymptotic properties of all estimators in the semiparametric model are derived. In the second situation, the third- and higher-order dependency structures are left unspecified, and interest focuses on the pairwise correlation between any two failure times. Thus, the pairwise association estimate can be obtained in the second stage by maximizing the pairwise likelihood function. Large-sample properties of the pairwise association estimator are also derived. Simulation studies show that the proposed approach is appropriate for practical use. To illustrate, a subset of the data from the Diabetic Retinopathy Study is used.
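Under strong simplifying assumptions (margins already estimated, so the data enter as uniform pseudo-observations, and no censoring), the second-stage association step reduces to maximizing the Clayton copula log-likelihood in the dependence parameter theta; the sketch below simulates Clayton data by the conditional method and recovers theta.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Second-stage MLE of the Clayton dependence parameter from pseudo-obs.
rng = np.random.default_rng(12)
n, theta_true = 1000, 2.0
u = rng.uniform(size=n)
w = rng.uniform(size=n)   # conditional method: solve C(v|u) = w for v
v = ((w**(-theta_true / (1 + theta_true)) - 1) * u**(-theta_true) + 1) \
    ** (-1 / theta_true)

def neg_loglik(theta):
    # Clayton density: (1+t)(uv)^{-(1+t)} (u^{-t}+v^{-t}-1)^{-(2+1/t)}
    s = u**-theta + v**-theta - 1
    ll = (np.log1p(theta) - (1 + theta) * (np.log(u) + np.log(v))
          - (2 + 1 / theta) * np.log(s))
    return -ll.sum()

res = minimize_scalar(neg_loglik, bounds=(0.01, 10), method="bounded")
print("theta_hat:", res.x)   # close to 2.0
```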

16.
Self-organizing maps (SOMs), introduced by Kohonen (Biol. Cybern. 43(1):59–69, 1982), are well known in the field of artificial neural networks. The way SOMs operate is very intuitive, which has led to great popularity and numerous applications (related to statistics: classification, clustering). The result of the unsupervised learning process performed by SOMs is a non-linear, low-dimensional projection of the high-dimensional input data that preserves certain features of the underlying data, e.g. the topology and probability distribution (Lee and Verleysen in Nonlinear Dimensionality Reduction, Springer, 2007; Kohonen in Self-organizing Maps, 3rd edn., Springer, 2001).
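A minimal sketch of the Kohonen training loop (1D grid of units, Gaussian neighbourhood, decaying learning rate; all settings illustrative):

```python
import numpy as np

# Minimal 1D self-organizing map trained on 2D inputs.
rng = np.random.default_rng(13)
data = rng.uniform(0, 1, (1000, 2))            # input vectors
m = 10                                         # number of map units
W = rng.uniform(0, 1, (m, 2))                  # codebook vectors
grid = np.arange(m)

for t, x in enumerate(data):
    lr = 0.5 * (1 - t / len(data))             # decaying learning rate
    sigma = max(m / 2 * (1 - t / len(data)), 0.5)
    bmu = np.argmin(np.linalg.norm(W - x, axis=1))    # best-matching unit
    h = np.exp(-((grid - bmu)**2) / (2 * sigma**2))   # neighbourhood kernel
    W += lr * h[:, None] * (x - W)             # pull units toward the input
print("unit first coordinates (roughly ordered):", np.round(W[:, 0], 2))
```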

17.
In this paper we present a review of population-based simulation for static inference problems. Such methods can be described as generating a collection of random variables {X_n}, n = 1, …, N, in parallel in order to simulate from some target density π (or potentially a sequence of target densities). Population-based simulation is important as many challenging sampling problems in applied statistics cannot be dealt with successfully by conventional Markov chain Monte Carlo (MCMC) methods. We summarize population-based MCMC (Geyer, Computing Science and Statistics: The 23rd Symposium on the Interface, pp. 156–163, 1991; Liang and Wong, J. Am. Stat. Assoc. 96:653–666, 2001) and sequential Monte Carlo samplers (SMC) (Del Moral, Doucet and Jasra, J. R. Stat. Soc. Ser. B 68:411–436, 2006a), providing a comparison of the approaches. We give numerical examples from Bayesian mixture modelling (Richardson and Green, J. R. Stat. Soc. Ser. B 59:731–792, 1997).
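A minimal population-MCMC sketch in the spirit of parallel tempering (Geyer 1991): several chains target tempered versions of a bimodal density and occasionally swap states, which lets the cold chain mix between modes. All tuning values are illustrative.

```python
import numpy as np

# Parallel tempering on a two-mode target; chain 0 has temperature 1.
rng = np.random.default_rng(14)

def logpi(x):                                  # bimodal target density
    return np.logaddexp(-0.5 * (x - 4)**2, -0.5 * (x + 4)**2)

betas = np.array([1.0, 0.3, 0.1])              # inverse temperatures
x = np.zeros(len(betas))
samples = []
for it in range(20_000):
    for k, beta in enumerate(betas):           # within-chain MH moves
        prop = x[k] + rng.normal(0, 2)
        if np.log(rng.uniform()) < beta * (logpi(prop) - logpi(x[k])):
            x[k] = prop
    k = rng.integers(len(betas) - 1)           # propose an adjacent swap
    a = (betas[k] - betas[k + 1]) * (logpi(x[k + 1]) - logpi(x[k]))
    if np.log(rng.uniform()) < a:
        x[k], x[k + 1] = x[k + 1], x[k]
    samples.append(x[0])                       # keep the cold chain
print("fraction of cold-chain samples in right mode:",
      np.mean(np.array(samples) > 0))          # ~0.5 if mixing well
```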

18.
19.
In randomized clinical trials, we are often concerned with comparing two-sample survival data. Although the log-rank test is usually suitable for this purpose, it may suffer substantial power loss when the two groups have nonproportional hazards. We work in the more general class of survival models of Yang and Prentice (Biometrika 92:1–17, 2005), which includes the proportional hazards model, and hence the setting of the log-rank test, as a special case, and improve efficiency by incorporating auxiliary covariates that are correlated with the survival times. In a model-free form, we augment the estimating equation with auxiliary covariates, and establish the efficiency improvement using the semiparametric theories in Zhang et al. (Biometrics 64:707–715, 2008) and Lu and Tsiatis (Biometrika 95:679–694, 2008). Under minimal assumptions, our approach produces an unbiased, asymptotically normal estimator with additional efficiency gain. Simulation studies and an application to a leukemia study show the satisfactory performance of the proposed method.
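For reference, the sketch below computes the standard two-sample log-rank statistic, the baseline that the augmented estimating equation improves upon; the augmentation itself is omitted, and the simulated data are illustrative.

```python
import numpy as np

# Two-sample log-rank statistic: at each event time, observed minus
# expected events in group 1, with the hypergeometric variance.
def logrank(time, event, group):
    num, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n, n1 = at_risk.sum(), (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        num += d1 - d * n1 / n
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return num / np.sqrt(var)                  # ~N(0,1) under H0

rng = np.random.default_rng(15)
g = np.repeat([0, 1], 100)
t_event = rng.exponential(np.where(g == 0, 1.0, 1.5))   # group 1 lives longer
c = rng.exponential(3.0, 200)                 # random censoring times
time = np.minimum(t_event, c)
event = (t_event <= c).astype(int)
print("log-rank Z:", logrank(time, event, g))
```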

20.
In view of its ongoing importance for a variety of practical applications, feature selection via ℓ1-regularization methods like the lasso has been subject to extensive theoretical as well as empirical investigation. Despite its popularity, mere ℓ1-regularization has been criticized as inadequate or ineffective, notably in situations in which additional structural knowledge about the predictors should be taken into account. This has stimulated the development of either systematically different regularization methods or double regularization approaches which combine ℓ1-regularization with a second kind of regularization designed to capture additional problem-specific structure. One instance thereof is the 'structured elastic net', a generalization of the proposal in Zou and Hastie (J. R. Stat. Soc. Ser. B 67:301–320, 2005), studied in Slawski et al. (Ann. Appl. Stat. 4(2):1056–1080, 2010) for the class of generalized linear models.
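A plain elastic-net fit, the unstructured special case of the double regularization above, can be sketched with scikit-learn (the l1_ratio parameter mixes the ℓ1 and ℓ2 penalties; data and settings are illustrative):

```python
import numpy as np
from sklearn.linear_model import ElasticNet

# Elastic net on a sparse linear model: l1 part selects features,
# l2 part stabilizes correlated coefficients.
rng = np.random.default_rng(16)
n, p = 100, 50
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:5] = 2.0             # only 5 true signals
y = X @ beta + rng.normal(0, 1, n)

fit = ElasticNet(alpha=0.1, l1_ratio=0.7).fit(X, y)
print("selected (nonzero) coefficients:", np.flatnonzero(fit.coef_))
```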
