Similar Articles
 20 similar articles found.
1.
In this paper, we study the construction of confidence intervals for a nonparametric regression function under linear process errors by using the blockwise technique. It is shown that the blockwise empirical likelihood (EL) ratio statistic is asymptotically $\chi^2$-distributed. The result is used to obtain EL-based confidence intervals for the nonparametric regression function. The finite-sample performance of the method is evaluated through a simulation study.
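As a toy illustration of the empirical likelihood machinery this paper builds on (sketched here for a simple mean, not the paper's blockwise regression-function version), the interval is obtained by inverting the EL ratio against its $\chi^2_1$ calibration; the data and grid are hypothetical:

```python
import numpy as np
from scipy.optimize import brentq

def el_log_ratio(x, mu):
    """-2 log empirical likelihood ratio for the mean (Owen-style).

    Assumes mu lies strictly inside the range of the data."""
    z = x - mu
    if z.min() >= 0 or z.max() <= 0:
        return np.inf  # mu outside the convex hull: EL ratio is zero
    # the constraint 1 + lam * z_i > 0 restricts lam to an open interval
    lo = -1.0 / z.max() + 1e-10
    hi = -1.0 / z.min() - 1e-10
    g = lambda lam: np.sum(z / (1.0 + lam * z))  # monotone decreasing in lam
    lam = brentq(g, lo, hi)
    return 2.0 * np.sum(np.log1p(lam * z))

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=200)
# 95% EL confidence set: all mu with -2 log R(mu) <= chi2_{1,0.95} = 3.841
grid = np.linspace(x.mean() - 1, x.mean() + 1, 401)
inside = [m for m in grid if el_log_ratio(x, m) <= 3.841]
print(f"EL 95% CI for the mean: [{min(inside):.3f}, {max(inside):.3f}]")
```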

2.
Consider a linear regression model with an $n$-dimensional response vector, regression parameter $\beta$ and independent and identically distributed errors. Suppose that the parameter of interest is $\theta = a^\top\beta$, where $a$ is a specified vector. Define the parameter $\tau = c^\top\beta - t$, where $c$ and $t$ are specified. Also suppose that we have uncertain prior information that $\tau = 0$. Part of our evaluation of a frequentist confidence interval for $\theta$ is the ratio (expected length of this confidence interval)/(expected length of the standard confidence interval), which we call the scaled expected length of this interval. We say that a confidence interval for $\theta$ utilizes this uncertain prior information if: (i) the scaled expected length of this interval is substantially less than 1 when $\tau = 0$; (ii) the maximum value of the scaled expected length is not too much larger than 1; and (iii) this confidence interval reverts to the standard confidence interval when the data happen to strongly contradict the prior information. Kabaila and Giri (2009) present a new method for finding such a confidence interval. Let $\hat\theta$ and $\hat\tau$ denote the least squares estimators of $\theta$ and $\tau$. Also let $\rho = \mathrm{Corr}(\hat\theta, \hat\tau)$ and let $m$ denote the error degrees of freedom. Using computations and new theoretical results, we show that the performance of this confidence interval improves as $|\rho|$ increases and $m$ decreases.
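Under the reconstruction above, the correlation $\rho$ between the least squares estimators has a closed form: since $\mathrm{Cov}(\hat\beta) = \sigma^2 (X^\top X)^{-1}$, both $\sigma^2$ and the offset $t$ cancel out of the correlation. A minimal sketch with an illustrative design matrix and illustrative vectors $a$, $c$:

```python
import numpy as np

def rho_theta_tau(X, a, c):
    """Corr(theta_hat, tau_hat) for theta = a'beta, tau = c'beta - t.

    For least squares, Cov(beta_hat) = sigma^2 (X'X)^{-1}, so sigma^2
    and the offset t drop out of the correlation."""
    G = np.linalg.inv(X.T @ X)
    return (a @ G @ c) / np.sqrt((a @ G @ a) * (c @ G @ c))

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))           # illustrative design matrix
a = np.array([1.0, 0.0, 0.0])          # parameter of interest: theta = beta_1
c = np.array([0.0, 1.0, -1.0])         # prior information about tau = beta_2 - beta_3
print(f"rho = {rho_theta_tau(X, a, c):.3f}")
```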

3.
Let $\{N(t), t > 0\}$ be a Poisson process with rate $\lambda > 0$, independent of the independent and identically distributed random variables $X_1, X_2, \ldots$ with mean $\mu$ and variance $\sigma^2$. The stochastic process $S(t) = \sum_{i=1}^{N(t)} X_i$ is then called a compound Poisson process and has a wide range of applications in, for example, physics, mining, finance and risk management. Among these applications, the average number of objects, which is defined to be $\lambda\mu$, is an important quantity. Although many papers have been devoted to the estimation of $\lambda\mu$ in the literature, in this paper, we use the well-known empirical likelihood method to construct confidence intervals. The simulation results show that the empirical likelihood method often outperforms the normal approximation and Edgeworth expansion approaches in terms of coverage probabilities. A real data set concerning coal-mining disasters is analyzed using these methods.
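A quick sketch of the quantity at stake: simulating a compound Poisson path on $[0,T]$ and forming the normal-approximation interval for $\lambda\mu$ that the EL method is benchmarked against (all parameter values are illustrative; the EL interval itself would be built by inverting the ratio statistic, as in the first abstract):

```python
import numpy as np

rng = np.random.default_rng(2)
lam, mu, sigma, T = 3.0, 2.0, 1.5, 100.0   # illustrative true values

# Simulate a compound Poisson process on [0, T]
n_events = rng.poisson(lam * T)
jumps = rng.normal(mu, sigma, size=n_events)
s_T = jumps.sum()

# Point estimate of lambda*mu and a normal-approximation 95% interval:
# Var S(T) = lam * T * E[X^2], estimated by n_events * mean(X_i^2)
est = s_T / T
se = np.sqrt(n_events * np.mean(jumps**2)) / T
print(f"lambda*mu estimate: {est:.3f} +/- {1.96 * se:.3f} (true {lam * mu})")
```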

4.
A reduced $U$-statistic is a $U$-statistic with its summands drawn from a restricted but balanced set of pairs. In this article, central limit theorems are derived for reduced $U$-statistics under mixing conditions, which significantly extends the work of Brown & Kildea in various aspects. It is shown and illustrated that reduced $U$-statistics are quite useful in deriving test statistics in various nonparametric testing problems.
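A minimal sketch of the idea, using a circular-lag pair design as one example of a "restricted but balanced" set (each index then appears in the same number of pairs); the kernel and data are illustrative, not the article's:

```python
import numpy as np

def reduced_u_stat(x, kernel, lags):
    """Average of kernel(x_i, x_j) over the balanced pair set
    {(i, (i+k) mod n) : k in lags}: each index appears in the same
    number of pairs, unlike an arbitrary subsample of pairs."""
    n = len(x)
    vals = [kernel(x[i], x[(i + k) % n]) for k in lags for i in range(n)]
    return np.mean(vals)

rng = np.random.default_rng(3)
x = rng.normal(size=500)
sign_kernel = lambda a, b: float(a + b > 0)  # kernel of the one-sample Wilcoxon
full_pairs = np.mean([sign_kernel(a, b) for i, a in enumerate(x)
                      for b in x[i + 1:]])
# The reduced statistic uses 3n pairs instead of n(n-1)/2:
print(reduced_u_stat(x, sign_kernel, lags=[1, 2, 3]), full_pairs)
```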

5.
This paper presents a new random weighting method for confidence interval estimation for the sample $p$-quantile. A theory is established to extend ordinary random weighting estimation from a non-smoothed function to a smoothed function, such as a kernel function. Based on this theory, a confidence interval is derived using the concept of backward critical points. The resulting confidence interval has the same length as that derived by ordinary random weighting estimation, but is distribution-free, and thus it is much more suitable for practical applications. Simulation results demonstrate that the proposed random weighting method achieves higher accuracy than the bootstrap method for confidence interval estimation.
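For orientation, here is ordinary (non-smoothed) random weighting for a quantile, using Dirichlet$(1,\ldots,1)$ weights, i.e. normalized standard exponentials; this is the baseline scheme the paper extends, not its smoothed, distribution-free construction with backward critical points:

```python
import numpy as np

def weighted_quantile(x, w, p):
    """Invert the weighted ECDF at level p."""
    order = np.argsort(x)
    cum = np.cumsum(w[order])
    return x[order][np.searchsorted(cum, p)]

rng = np.random.default_rng(4)
x = rng.standard_exponential(500)
p, B = 0.5, 2000
# Random weighting: Dirichlet(1,...,1) weights = normalized exponentials
reps = []
for _ in range(B):
    w = rng.standard_exponential(len(x))
    reps.append(weighted_quantile(x, w / w.sum(), p))
lo, hi = np.quantile(reps, [0.025, 0.975])
print(f"random-weighting 95% CI for the {p}-quantile: [{lo:.3f}, {hi:.3f}]")
```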

6.
A joint estimation approach for multiple high-dimensional Gaussian copula graphical models is proposed, which achieves estimation robustness by exploiting non-parametric rank-based correlation coefficient estimators. Although we focus on continuous data in this paper, the proposed method can be extended to deal with binary or mixed data. Based on a weighted minimisation problem, the estimators can be obtained by implementing second-order cone programming. Theoretical properties of the procedure are investigated. We show that the proposed joint estimation procedure leads to a faster convergence rate than estimating the graphs individually. It is also shown that the proposed procedure achieves exact graph structure recovery with probability tending to 1 under certain regularity conditions. Besides the theoretical analysis, we conduct numerical simulations to compare the estimation and graph recovery performance of several state-of-the-art methods, including both joint estimation methods and methods that estimate each graph separately. The proposed method is then applied to a gene expression data set, which illustrates its practical usefulness.
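A single-graph sketch of the rank-based ingredient (the paper's joint estimation across several graphs via second-order cone programming is not reproduced here): Kendall's tau is converted to a correlation through the sine transform and fed to a graphical lasso. The sine-transformed matrix is assumed close enough to positive definite for the solver, and `alpha` is an illustrative tuning value:

```python
import numpy as np
from scipy.stats import kendalltau
from sklearn.covariance import graphical_lasso

def rank_based_precision(X, alpha):
    """Gaussian copula graph for one data set: Kendall's tau ->
    sine transform -> graphical lasso on the implied correlation."""
    n, p = X.shape
    S = np.eye(p)
    for i in range(p):
        for j in range(i + 1, p):
            tau, _ = kendalltau(X[:, i], X[:, j])
            S[i, j] = S[j, i] = np.sin(np.pi * tau / 2.0)  # robust correlation
    _, precision = graphical_lasso(S, alpha=alpha)
    return precision

rng = np.random.default_rng(5)
Z = rng.multivariate_normal(np.zeros(4), np.eye(4) + 0.5, size=300)
X = np.exp(Z)                   # monotone marginal transform: ranks unchanged
Theta = rank_based_precision(X, alpha=0.1)
print(np.round(Theta, 2))
```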

7.
For modelling the locations of pyramidal cells in the human cerebral cortex, we suggest a hierarchical point process in $\mathbb{R}^3$ that exhibits anisotropy in the form of cylinders extending along the $z$-axis. The model consists first of a generalised shot noise Cox process for the $xy$-coordinates, providing cylindrical clusters, and next of a Markov random field model for the $z$-coordinates conditioned on the $xy$-coordinates, providing either repulsion, aggregation or both within specified areas of interaction. Several cases of these hierarchical point processes are fitted to two pyramidal cell data sets, and of these a final model allowing for both repulsion and attraction between the points seems adequate. We discuss how the final model relates to the so-called minicolumn hypothesis in neuroscience.
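A toy simulation of the cylindrical-cluster geometry described here (a Poisson cluster process in $xy$ with uniform $z$, not the paper's fitted generalised shot noise Cox / Markov random field model); all parameter values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)
# Toy cylindrical clusters: Poisson parents in the xy plane, offspring
# displaced around each parent in xy and spread uniformly in z,
# so each cluster forms a cylinder extending along the z-axis.
kappa, mean_offspring, sd_xy = 10, 20, 0.02   # illustrative parameters
parents = rng.uniform(0, 1, size=(rng.poisson(kappa), 2))
points = []
for px, py in parents:
    m = rng.poisson(mean_offspring)
    xy = rng.normal([px, py], sd_xy, size=(m, 2))
    z = rng.uniform(0, 1, size=(m, 1))        # no clustering along z
    points.append(np.hstack([xy, z]))
points = np.vstack(points)
print(points.shape)  # (total cells, 3): columns x, y, z
```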

8.
This paper deals with the study of dependencies between two given events modelled by point processes. In particular, we focus on the context of DNA to detect favoured or avoided distances between two given motifs along a genome, suggesting possible interactions at a molecular level. For this, we naturally introduce a so-called reproduction function $h$ that allows us to quantify the favoured positions of the motifs and that is considered as the intensity of a Poisson process. Our first interest is the estimation of this function $h$, assumed to be well localized. The estimator $\hat{h}$ based on random thresholds achieves an oracle inequality. Then, minimax properties of $\hat{h}$ on Besov balls are established. Some simulations are provided, demonstrating the good practical behaviour of our procedure. Finally, our method is applied to the analysis of the dependence between promoter sites and genes along the genome of the Escherichia coli bacterium.
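As a naive stand-in for the estimation of $h$ (a plain histogram of motif-1-to-motif-2 distances, without the paper's random thresholding and oracle guarantees), on synthetic positions with a planted favoured distance of 150 bp:

```python
import numpy as np

rng = np.random.default_rng(7)
L = 1_000_000
u = np.sort(rng.integers(0, L, 2000))          # occurrences of motif 1
keep = rng.random(u.size) < 0.3                # 30% of them "reproduce"
v = np.concatenate([rng.integers(0, L, 1500),  # background occurrences
                    u[keep] + rng.poisson(150, keep.sum())])

# Crude estimate of the reproduction function h: histogram of signed
# distances from each motif-1 position to nearby motif-2 positions
d = (v[None, :] - u[:, None]).ravel()
d = d[np.abs(d) <= 500]
hist, edges = np.histogram(d, bins=50)
print(edges[np.argmax(hist)])   # peak bin should sit near +150
```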

9.
We consider in this paper the semiparametric mixture of two unknown distributions that are equal up to a location parameter. The model is said to be semiparametric in the sense that the mixed distribution is not assumed to belong to a parametric family. To ensure the identifiability of the model, it is assumed that the mixed distribution is zero-symmetric, the model being then defined by the mixing proportion, two location parameters and the probability density function of the mixed distribution. We propose a new class of M-estimators of these parameters based on a Fourier approach and prove that they are $\sqrt{n}$-consistent under mild regularity conditions. Their finite sample properties are illustrated by a Monte Carlo study, and a benchmark real dataset is also studied with our method.

10.
11.
12.
In this paper, we consider the problem of estimating the Laplace transform of volatility within a fixed time interval $[0,T]$ using high-frequency sampling, where we assume that the discretized observations of the latent process are contaminated by microstructure noise. We use the pre-averaging approach to deal with the effect of microstructure noise. Under the high-frequency scenario, we obtain a consistent estimator whose convergence rate is $n^{-1/4}$, which is known as the optimal convergence rate for the estimation of integrated volatility functionals in the presence of microstructure noise. The related central limit theorem is established. Simulation studies confirm the finite-sample performance of the proposed estimator.
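A sketch of one standard pre-averaging construction, here targeting integrated volatility itself rather than the Laplace transform of volatility studied in the paper; the weight function, window size and parameter values follow common choices and are assumptions, not taken from the paper:

```python
import numpy as np

def preaveraged_iv(y, theta=0.5):
    """Pre-averaging estimator of integrated volatility from noisy
    high-frequency observations y (Jacod et al., 2009, style)."""
    n = len(y) - 1
    k = int(np.ceil(theta * np.sqrt(n)))           # window ~ theta * sqrt(n)
    g = np.minimum(np.arange(1, k) / k, 1 - np.arange(1, k) / k)
    psi1 = k * np.sum(np.diff(np.concatenate([[0], g, [0]])) ** 2)
    psi2 = np.sum(g ** 2) / k
    dy = np.diff(y)
    # pre-averaged returns: moving weighted sums of the raw returns
    ybar = np.convolve(dy, g[::-1], mode="valid")
    omega2 = np.sum(dy ** 2) / (2 * n)             # noise variance estimate
    return (np.sum(ybar ** 2) - (n * psi1 / k) * omega2) / (psi2 * k)

rng = np.random.default_rng(8)
n, sigma, omega = 23400, 0.2, 0.005
x = np.cumsum(sigma * rng.normal(0, np.sqrt(1 / n), n + 1))  # latent price
y = x + omega * rng.normal(size=n + 1)                       # + noise
print(preaveraged_iv(y), sigma**2)  # estimate vs true integrated volatility
```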

13.
14.
15.
16.
17.
We study estimation and feature selection problems in mixture-of-experts models. An $l_2$-penalized maximum likelihood estimator is proposed as an alternative to the ordinary maximum likelihood estimator. The estimator is particularly advantageous when fitting a mixture-of-experts model to data with many correlated features. It is shown that the proposed estimator is root-$n$ consistent, and simulations show its superior finite-sample behaviour compared to that of the maximum likelihood estimator. For feature selection, two extra penalty functions are applied to the $l_2$-penalized log-likelihood function. The proposed feature selection method is computationally much more efficient than the popular all-subset selection methods. Theoretically, it is shown that the method is consistent in feature selection, and simulations support our theoretical results. A real-data example is presented to demonstrate the method. The Canadian Journal of Statistics 38: 519–539; 2010 © 2010 Statistical Society of Canada
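The shape of an $l_2$-penalized mixture-of-experts likelihood can be made concrete with a deliberately minimal two-expert model (logistic gate, unit-variance Gaussian experts, plain ridge penalty); this is a sketch under those assumptions, not the paper's estimator or its feature-selection penalties:

```python
import numpy as np
from scipy.optimize import minimize

def moe_penalized_nll(params, X, y, lam):
    """l2-penalized negative log-likelihood of a two-expert
    mixture-of-experts with a logistic gate and unit-variance
    Gaussian experts (a deliberately minimal version)."""
    p = X.shape[1]
    w, b1, b2 = params[:p], params[p:2 * p], params[2 * p:]
    gate = 1.0 / (1.0 + np.exp(-X @ w))
    phi1 = np.exp(-0.5 * (y - X @ b1) ** 2) / np.sqrt(2 * np.pi)
    phi2 = np.exp(-0.5 * (y - X @ b2) ** 2) / np.sqrt(2 * np.pi)
    nll = -np.sum(np.log(gate * phi1 + (1 - gate) * phi2 + 1e-300))
    return nll + lam * np.sum(params ** 2)        # ridge penalty

rng = np.random.default_rng(9)
X = rng.normal(size=(400, 3))
regime = rng.random(400) < 1 / (1 + np.exp(-X[:, 0]))    # gate driven by x1
y = np.where(regime, X @ [1, 2, 0], X @ [-1, 0, 2]) + rng.normal(size=400)
fit = minimize(moe_penalized_nll, rng.normal(size=9) * 0.1,
               args=(X, y, 0.1), method="L-BFGS-B")
print(np.round(fit.x.reshape(3, 3), 2))  # gate, expert-1, expert-2 weights
```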

18.
Estimation of the time-average variance constant (TAVC), the asymptotic variance of the sample mean of a dependent process, is of fundamental importance in various fields of statistics. For frequentists, it is crucial for constructing confidence intervals for the mean and serves as a normalizing constant in various test statistics. For Bayesians, it is widely used for evaluating the effective sample size and for convergence diagnosis in Markov chain Monte Carlo methods. In this paper, by considering high-order corrections to the asymptotic biases, we develop a new class of TAVC estimators that enjoys optimal $L_2$-convergence rates under different degrees of serial dependence of the stochastic process. The high-order correction procedure is also applicable to estimation of the so-called smoothness parameter, which is essential in determining the optimal bandwidth. Comparisons with existing TAVC estimators are comprehensively investigated. In particular, the proposed optimal high-order corrected estimator has the best performance in terms of mean squared error.
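For reference, here is the classical lag-window TAVC estimator with Bartlett weights, i.e. the kind of estimator whose asymptotic bias the proposed high-order corrections improve upon; the bandwidth rule and the AR(1) test case are illustrative choices:

```python
import numpy as np

def tavc_bartlett(x, bandwidth=None):
    """Lag-window TAVC estimate: sum of sample autocovariances with
    Bartlett weights -- the classical estimator whose bias the paper's
    high-order corrections address."""
    n = len(x)
    if bandwidth is None:
        bandwidth = int(np.floor(n ** (1 / 3)))   # a common default rate
    xc = x - x.mean()
    acov = np.array([xc[:n - k] @ xc[k:] / n for k in range(bandwidth + 1)])
    w = 1 - np.arange(bandwidth + 1) / (bandwidth + 1)  # Bartlett taper
    return acov[0] + 2 * np.sum(w[1:] * acov[1:])

rng = np.random.default_rng(10)
phi = 0.7
e = rng.normal(size=100_000)
x = np.empty_like(e)                   # AR(1): true TAVC = 1/(1-phi)^2
x[0] = e[0]
for t in range(1, len(e)):
    x[t] = phi * x[t - 1] + e[t]
print(tavc_bartlett(x), 1 / (1 - phi) ** 2)
```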

19.
The penalized maximum likelihood estimator (PMLE) has been widely used for variable selection in high-dimensional data. Various penalty functions have been employed for this purpose, e.g., the Lasso, the weighted Lasso, or the smoothly clipped absolute deviation (SCAD) penalty. However, the PMLE can be very sensitive to outliers in the data, especially to outliers in the covariates (leverage points). In order to overcome this disadvantage, we propose the penalized maximum trimmed likelihood estimator (PMTLE) to estimate the unknown parameters in a robust way. The computation of the PMTLE takes advantage of the same technology as used for the PMLE, but the estimation is based on subsamples only. The breakdown point properties of the PMTLE are discussed using the notion of $d$-fullness. The performance of the proposed estimator is evaluated in a simulation study for the classical multiple linear and Poisson linear regression models.
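A sketch of the trimming idea for the Gaussian linear case, where trimmed likelihood reduces to trimmed least squares: alternate a penalized fit on the current subsample with a concentration step that keeps the $h$ best-fitting observations (in the spirit of FAST-LTS). The use of scikit-learn's Lasso and all tuning values are assumptions for illustration, not the paper's algorithm:

```python
import numpy as np
from sklearn.linear_model import Lasso

def pmtle_lasso(X, y, h, alpha, n_iter=20, seed=0):
    """Penalized maximum trimmed likelihood, sketched for Gaussian
    linear regression with a Lasso penalty: alternate between fitting
    on the current subsample and keeping the h smallest residuals."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(y), size=h, replace=False)
    model = Lasso(alpha=alpha)
    for _ in range(n_iter):
        model.fit(X[idx], y[idx])
        resid = np.abs(y - model.predict(X))
        idx = np.argsort(resid)[:h]        # keep the h best-fitting points
    return model

rng = np.random.default_rng(11)
X = rng.normal(size=(200, 10))
y = X @ np.r_[2.0, -1.5, np.zeros(8)] + 0.5 * rng.normal(size=200)
X[:20] += 10.0; y[:20] -= 30.0             # leverage-point outliers
fit = pmtle_lasso(X, y, h=150, alpha=0.05)
print(np.round(fit.coef_, 2))              # outliers excluded by trimming
```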

20.