Similar documents
20 similar documents found (search time: 46 ms)
1.
In this paper, we study the construction of confidence intervals for a nonparametric regression function under linear process errors by using the blockwise technique. It is shown that the blockwise empirical likelihood (EL) ratio statistic is asymptotically chi-squared distributed. This result is used to obtain EL-based confidence intervals for the nonparametric regression function. The finite-sample performance of the method is evaluated through a simulation study.
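As background for the abstract above, the empirical likelihood ratio for a mean can be computed by solving a one-dimensional dual problem. The sketch below is a minimal i.i.d. illustration, not the blockwise version for dependent errors that the paper develops; the function name and tolerances are my own choices, and `mu` must lie strictly between the sample minimum and maximum for the dual problem to be well posed.

```python
import numpy as np

def el_log_ratio(x, mu):
    """-2 log empirical likelihood ratio for the mean mu of sample x.

    Solves the dual equation sum(z_i / (1 + lam*z_i)) = 0, z_i = x_i - mu,
    for the Lagrange multiplier lam by bisection; the statistic is then
    2 * sum(log(1 + lam*z_i)), asymptotically chi-squared with 1 df.
    """
    z = x - mu
    lo = -1.0 / z.max() + 1e-10      # keep every weight 1 + lam*z_i positive
    hi = -1.0 / z.min() - 1e-10
    def g(lam):
        return np.sum(z / (1.0 + lam * z))
    for _ in range(200):             # g is strictly decreasing in lam
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return 2.0 * np.sum(np.log1p(lam * z))

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=200)
stat = el_log_ratio(x, 2.0)          # compare against chi-square(1) quantiles
```

Inverting the test (keeping the candidate means whose statistic is below the chi-squared quantile 3.84) yields a 95% EL confidence interval.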

2.
Let {N(t), t > 0} be a Poisson process with rate λ > 0, independent of the independent and identically distributed random variables X1, X2, … with mean μ and variance σ². The stochastic process X(t) = X1 + ⋯ + X_{N(t)} is then called a compound Poisson process and has a wide range of applications in, for example, physics, mining, finance and risk management. Among these applications, the average number of objects, defined to be λμ, is an important quantity. Although many papers have been devoted to the estimation of λμ in the literature, in this paper we use the well-known empirical likelihood method to construct confidence intervals. The simulation results show that the empirical likelihood method often outperforms the normal approximation and Edgeworth expansion approaches in terms of coverage probabilities. A real data set concerning coal-mining disasters is analysed using these methods.
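To make the setting concrete, here is a small simulation sketch: it estimates λμ by the mean of unit-time increments and builds the normal-approximation interval that the paper uses as a baseline. This is not the paper's empirical likelihood construction, and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, mu, sigma = 3.0, 2.0, 1.0       # hypothetical rate and jump moments
n_windows = 500                       # unit-length observation windows

# Each unit window: N ~ Poisson(lam) jumps, each jump ~ Normal(mu, sigma);
# the increment of the compound Poisson process over a unit window has
# mean lam*mu and variance lam*(mu**2 + sigma**2).
counts = rng.poisson(lam, size=n_windows)
increments = np.array([rng.normal(mu, sigma, k).sum() for k in counts])

est = increments.mean()                            # estimates lam*mu = 6
se = increments.std(ddof=1) / np.sqrt(n_windows)
ci = (est - 1.96 * se, est + 1.96 * se)            # normal-approximation CI
```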

3.
Consider a linear regression model with an n-dimensional response vector, regression parameter β and independent and identically distributed errors. Suppose that the parameter of interest is θ = a⊤β, where a is a specified vector. Define the parameter τ = c⊤β − t, where c and t are specified. Also suppose that we have uncertain prior information that τ = 0. Part of our evaluation of a frequentist confidence interval for θ is the ratio (expected length of this confidence interval)/(expected length of the standard confidence interval), which we call the scaled expected length of this interval. We say that a confidence interval for θ utilizes this uncertain prior information if: (i) the scaled expected length of this interval is substantially less than 1 when τ = 0; (ii) the maximum value of the scaled expected length is not too much larger than 1; and (iii) this confidence interval reverts to the standard confidence interval when the data happen to strongly contradict the prior information. Kabaila and Giri (2009) present a new method for finding such a confidence interval. Let θ̂ denote the least squares estimator of θ. Using computations and new theoretical results, we show that the performance of this confidence interval improves as the residual degrees of freedom increase and as the magnitude of the correlation between the least squares estimators of θ and τ decreases.

4.
A reduced U-statistic is a U-statistic with its summands drawn from a restricted but balanced set of pairs. In this article, central limit theorems are derived for reduced U-statistics under a mixing condition, which significantly extends the work of Brown & Kildea in various aspects. It is shown and illustrated that reduced U-statistics are quite useful in deriving test statistics in various nonparametric testing problems.

5.
A joint estimation approach for multiple high-dimensional Gaussian copula graphical models is proposed, which achieves estimation robustness by exploiting non-parametric rank-based correlation coefficient estimators. Although we focus on continuous data in this paper, the proposed method can be extended to deal with binary or mixed data. Based on a weighted minimisation problem, the estimators can be obtained by implementing second-order cone programming. Theoretical properties of the procedure are investigated. We show that the proposed joint estimation procedure leads to a faster convergence rate than estimating the graphs individually. It is also shown that the proposed procedure achieves exact graph structure recovery with probability tending to 1 under certain regularity conditions. Beyond the theoretical analysis, we conduct numerical simulations to compare the estimation and graph recovery performance of several state-of-the-art methods, including both joint estimation methods and methods that estimate each graph separately. The proposed method is then applied to a gene expression data set, which illustrates its practical usefulness.
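The rank-based ingredient can be illustrated in a few lines: for a Gaussian copula, Kendall's tau relates to the latent correlation via rho = sin(pi*tau/2), so the plug-in estimator is invariant to monotone marginal transforms. The sketch below shows only this single-pair building block (the joint graphical-model estimation via second-order cone programming is beyond a short example); the sample size and true correlation are hypothetical.

```python
import numpy as np

def kendall_tau(x, y):
    """O(n^2) Kendall's tau: average sign agreement over all pairs."""
    n = len(x)
    s = 0.0
    for i in range(n):
        s += np.sum(np.sign(x[i + 1:] - x[i]) * np.sign(y[i + 1:] - y[i]))
    return 2.0 * s / (n * (n - 1))

# Latent bivariate normal with correlation 0.6, observed through
# monotone marginal transforms (exp and cube) as in a Gaussian copula.
rng = np.random.default_rng(7)
n, rho = 2000, 0.6
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
x, y = np.exp(z[:, 0]), z[:, 1] ** 3

# Rank-based latent-correlation estimate: unaffected by the transforms.
rho_hat = np.sin(np.pi / 2 * kendall_tau(x, y))
```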

6.
There is an emerging need to advance linear mixed model technology to include variable selection methods that can simultaneously choose and estimate important effects from a potentially large number of covariates. However, the complex nature of variable selection has made it difficult to incorporate into mixed models. In this paper we extend a well-known class of penalties and show that they can be integrated succinctly into a linear mixed model setting. Under mild conditions, the estimator obtained from this mixed model penalised likelihood is shown to be consistent and asymptotically normally distributed. A simulation study reveals that the extended family of penalties achieves varying degrees of estimator shrinkage depending on the value of one of its parameters. The simulation study also reveals a link, for a given penalty, between the number of false positives detected and the number of true coefficients. This new mixed model variable selection (MMVS) technology was applied to a complex wheat quality data set to determine significant quantitative trait loci (QTL).
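For intuition about penalised variable selection, here is a coordinate-descent sketch with the l1 penalty as the simplest member of such a penalty family, applied to a plain linear model rather than a mixed model; the data-generating values are hypothetical.

```python
import numpy as np

def lasso_cd(X, y, lam, n_sweeps=200):
    """Coordinate-descent lasso: minimize (1/2n)||y - Xb||^2 + lam*||b||_1.

    Each coordinate update is a soft-thresholding step on the partial
    residual, which is what produces exact zeros (variable selection).
    """
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_sweeps):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]          # partial residual
            rho = X[:, j] @ r / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b

rng = np.random.default_rng(6)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [3.0, -2.0, 1.5]                         # sparse truth
y = X @ beta + rng.normal(size=n)
b_hat = lasso_cd(X, y, lam=0.1)                     # shrinks nulls to zero
```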

7.
This paper studies dependencies between two given events modelled by point processes. In particular, we focus on DNA, detecting favoured or avoided distances between two given motifs along a genome, which suggests possible interactions at the molecular level. For this, we introduce a so-called reproduction function h that quantifies the favoured positions of the motifs and is taken as the intensity of a Poisson process. Our first interest is the estimation of this function h, assumed to be well localized. An estimator based on random thresholds achieves an oracle inequality, and minimax properties of the estimator on Besov balls are established. Simulations are provided, demonstrating the good practical behaviour of our procedure. Finally, our method is applied to the analysis of the dependence between promoter sites and genes along the genome of the Escherichia coli bacterium.

8.
The starting point in uncertainty quantification is a stochastic model, which is fitted to a technical system in a suitable way, and prediction of uncertainty is carried out within this stochastic model. In any application, such a model will not be perfect, so any uncertainty quantification based on it must take the inadequacy of the model into account. In this paper, we rigorously show how the observed data of the technical system can be used to build a conservative non-asymptotic confidence interval on quantiles related to experiments with the technical system. The construction of this confidence interval is based on concentration inequalities and order statistics. An asymptotic bound on the length of this confidence interval is presented. Here we assume that engineers use more and more of their knowledge to build models whose error is bounded by a sequence of suitable order. The results are illustrated by applying the newly proposed approach to real and simulated data.

9.
Estimation of the time-average variance constant (TAVC), the asymptotic variance of the sample mean of a dependent process, is of fundamental importance in various fields of statistics. For frequentists, it is crucial for constructing confidence intervals for the mean and serves as a normalizing constant in various test statistics. For Bayesians, it is widely used for evaluating the effective sample size and for convergence diagnosis in Markov chain Monte Carlo methods. In this paper, by considering high-order corrections to the asymptotic biases, we develop a new class of TAVC estimators that enjoys optimal convergence rates under different degrees of serial dependence of the stochastic process. The high-order correction procedure also applies to estimation of the so-called smoothness parameter, which is essential in determining the optimal bandwidth. Comparisons with existing TAVC estimators are comprehensively investigated. In particular, the proposed optimal high-order corrected estimator has the best performance in terms of mean squared error.
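As a baseline for what a TAVC estimator does, the classical batch-means estimator (not the high-order corrected class proposed in the paper) can be sketched as follows; the AR(1) example and the n^(1/3) batch size are illustrative choices.

```python
import numpy as np

def batch_means_tavc(x, batch_size):
    """Batch-means estimate of the time-average variance constant, i.e.
    the asymptotic variance of sqrt(n) * (sample mean of x - true mean)."""
    nb = len(x) // batch_size
    means = x[: nb * batch_size].reshape(nb, batch_size).mean(axis=1)
    return batch_size * means.var(ddof=1)

# AR(1) example: x_t = phi * x_{t-1} + e_t has TAVC = 1 / (1 - phi)^2.
rng = np.random.default_rng(2)
phi, n = 0.5, 100_000
e = rng.normal(size=n)
x = np.empty(n)
x[0] = e[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + e[t]

tavc_hat = batch_means_tavc(x, batch_size=int(n ** (1 / 3)))  # true value 4
```

The bandwidth (batch size) choice drives the bias-variance trade-off here, which is exactly where the smoothness parameter mentioned in the abstract enters.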

10.
For modelling the location of pyramidal cells in the human cerebral cortex, we suggest a hierarchical point process in three-dimensional space that exhibits anisotropy in the form of cylinders extending along the z-axis. The model consists first of a generalised shot noise Cox process for the xy-coordinates, providing cylindrical clusters, and next of a Markov random field model for the z-coordinates conditioned on the xy-coordinates, providing repulsion, aggregation or both within specified areas of interaction. Several cases of these hierarchical point processes are fitted to two pyramidal cell data sets, and of these a final model allowing for both repulsion and attraction between the points seems adequate. We discuss how the final model relates to the so-called minicolumn hypothesis in neuroscience.

11.
We consider in this paper the semiparametric mixture of two unknown distributions that are equal up to a location parameter. The model is said to be semiparametric in the sense that the mixed distribution is not assumed to belong to a parametric family. To ensure the identifiability of the model, the mixed distribution is assumed to be symmetric about zero, the model then being defined by the mixing proportion, two location parameters and the probability density function of the mixed distribution. We propose a new class of M-estimators of these parameters based on a Fourier approach and prove that they are root-n consistent under mild regularity conditions. Their finite-sample properties are illustrated by a Monte Carlo study, and a benchmark real data set is also analysed with our method.

12.
We discuss a class of difference-based estimators for the autocovariance in nonparametric regression when the signal is discontinuous and the errors form a stationary m-dependent process. These estimators circumvent the particularly challenging task of pre-estimating such an unknown regression function. We provide finite-sample expressions for their mean squared errors for piecewise constant signals and Gaussian errors. Based on this, we derive bias-optimized estimates that do not depend on the unknown autocovariance structure. Notably, for positively correlated errors, the part of the variance of our estimators that depends on the signal is minimal as well. Further, we provide sufficient conditions for consistency; this result is extended to piecewise Hölder regression with non-Gaussian errors. We combine our bias-optimized autocovariance estimates with a projection-based approach to derive covariance matrix estimates, a method that is of independent interest. An R package, several simulations and an application to biophysical measurements complement this paper.
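The core difference-based idea can be shown in miniature: first-order differences cancel a locally constant signal, so the error variance is estimable without pre-estimating the regression function. This sketch covers only the simplest lag-1 variance case with independent errors, not the paper's full m-dependent autocovariance machinery; the jump location and sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma = 1000, 0.5
signal = np.where(np.arange(n) < n // 2, 0.0, 5.0)   # one large jump
y = signal + rng.normal(0.0, sigma, n)

# Differencing removes the piecewise constant signal everywhere except at
# the single jump, which contributes only O(1/n) bias to the estimate of
# sigma^2 = Var(error) = 0.25.
d = np.diff(y)
sigma2_hat = np.sum(d ** 2) / (2 * (n - 1))
```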

13.
In drug development, after completion of phase II proof-of-concept trials, the sponsor needs to make a go/no-go decision to start expensive phase III trials. The probability of statistical success (PoSS) of the phase III trials based on data from earlier studies is an important factor in that decision-making process. Instead of statistical power, the predictive power of a phase III trial, which takes into account the uncertainty in the estimation of treatment effect from earlier studies, has been proposed to evaluate the PoSS of a single trial. However, regulatory authorities generally require statistical significance in two (or more) trials for marketing licensure. We show that the predictive statistics of two future trials are statistically correlated through use of the common observed data from earlier studies. Thus, the joint predictive power should not be evaluated as a simplistic product of the predictive powers of the individual trials. We develop the relevant formulae for the appropriate evaluation of the joint predictive power and provide numerical examples. Our methodology is further extended to the more complex phase III development scenario comprising more than two (K > 2) trials, that is, the evaluation of the PoSS of at least k0 (1 ≤ k0 ≤ K) trials from a program of K total trials. Copyright © 2013 John Wiley & Sons, Ltd.

14.
In this paper, we consider the problem of estimating the Laplace transform of volatility within a fixed time interval [0, T] using high-frequency sampling, where we assume that the discretized observations of the latent process are contaminated by microstructure noise. We use the pre-averaging approach to deal with the effect of microstructure noise. Under the high-frequency scenario, we obtain a consistent estimator whose convergence rate is n^{1/4}, the known optimal rate for estimating integrated volatility functionals in the presence of microstructure noise. The related central limit theorem is established. Simulation studies confirm the finite-sample performance of the proposed estimator.

15.
We study estimation and feature selection problems in mixture-of-experts models. An $l_2$-penalized maximum likelihood estimator is proposed as an alternative to the ordinary maximum likelihood estimator. The estimator is particularly advantageous when fitting a mixture-of-experts model to data with many correlated features. It is shown that the proposed estimator is root-$n$ consistent, and simulations show its superior finite-sample behaviour compared with that of the maximum likelihood estimator. For feature selection, two extra penalty functions are applied to the $l_2$-penalized log-likelihood function. The proposed feature selection method is computationally much more efficient than the popular all-subset selection methods. It is shown theoretically that the method is consistent in feature selection, and simulations support our theoretical results. A real-data example is presented to demonstrate the method. The Canadian Journal of Statistics 38: 519–539; 2010 © 2010 Statistical Society of Canada

16.
We study a Bayesian analysis of the proportional hazards model with time-varying coefficients. We consider two priors for the time-varying coefficients – one based on B-spline basis functions and the other based on Gamma processes – and we use a beta process prior for the baseline hazard function. We show that the two priors provide optimal posterior convergence rates (up to a logarithmic factor) and that the Bayes factor is consistent for testing the assumption of proportional hazards when the two priors are used for the alternative hypothesis. In addition, adaptive priors are considered for theoretical investigation, in which the smoothness of the true function is assumed to be unknown, and prior distributions are assigned based on B-splines.

17.
What is the interpretation of a confidence interval following estimation of a Box-Cox transformation parameter λ? Several authors have argued that confidence intervals for linear model parameters ψ can be constructed as if λ were known in advance, rather than estimated, provided the estimand is interpreted conditionally given $\hat \lambda$. If the estimand is defined as $\psi \left( {\hat \lambda } \right)$, a function of the estimated transformation, can the nominal confidence level be regarded as a conditional coverage probability given $\hat \lambda$, where the interval is random and the estimand is fixed? Or should it be regarded as an unconditional probability, where both the interval and the estimand are random? This article investigates these questions via large-n approximations, small-σ approximations, and simulations. It is shown that, when model assumptions are satisfied and n is large, the nominal confidence level closely approximates the conditional coverage probability. When n is small, this conditional approximation is still good for regression models with small error variance. The conditional approximation can be poor for regression models with moderate error variance and single-factor ANOVA models with small to moderate error variance. In these situations the nominal confidence level still provides a good approximation for the unconditional coverage probability. This suggests that, while the estimand may be interpreted conditionally, the confidence level should sometimes be interpreted unconditionally.
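The estimation step under discussion — choosing λ by maximising the profile likelihood — can be sketched with a simple grid search (`scipy.stats.boxcox` provides an equivalent built-in); the log-normal example, for which the true λ is 0, is hypothetical.

```python
import numpy as np

def boxcox_profile_mle(x, grid=None):
    """Grid-search profile-likelihood estimate of the Box-Cox lambda:
    ll(lam) = -(n/2) * log(sigma_hat^2(lam)) + (lam - 1) * sum(log x)."""
    if grid is None:
        grid = np.linspace(-2.0, 2.0, 401)
    n, logx = len(x), np.log(x)
    best_lam, best_ll = grid[0], -np.inf
    for lam in grid:
        # lam = 0 corresponds to the log transform.
        y = logx if abs(lam) < 1e-12 else (x ** lam - 1.0) / lam
        ll = -0.5 * n * np.log(y.var()) + (lam - 1.0) * logx.sum()
        if ll > best_ll:
            best_lam, best_ll = lam, ll
    return best_lam

rng = np.random.default_rng(4)
x = np.exp(rng.normal(1.0, 0.4, size=1000))   # log-normal: true lambda = 0
lam_hat = boxcox_profile_mle(x)
```

A conditional interval then treats `lam_hat` as fixed and applies standard normal-theory intervals to the transformed data, which is exactly the practice whose coverage interpretation the article investigates.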

18.
We propose a new method for risk-analytic benchmark dose (BMD) estimation in a dose-response setting when the responses are measured on a continuous scale. For each dose level d, the observation X(d) is assumed to follow a normal distribution: X(d) ~ N(μ(d), σ²). No specific parametric form is imposed upon the mean μ(d), however. Instead, nonparametric maximum likelihood estimates of μ(d) and σ are obtained under a monotonicity constraint on μ(d). For purposes of quantitative risk assessment, a 'hybrid' form of risk function is defined for any dose d as R(d) = P[X(d) < c], where c > 0 is a constant independent of d. The BMD is then determined by inverting the additional risk function R_A(d) = R(d) − R(0) at some specified value of the benchmark response. Asymptotic theory for the point estimators is derived, and a finite-sample study is conducted, using both real and simulated data. When a large number of doses are available, we propose an adaptive grouping method for estimating the BMD, which is shown to have optimal mean integrated squared error under appropriate designs.
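Under the normal model above, the hybrid risk and the BMD inversion are direct to compute. The sketch below uses a hypothetical parametric mean μ(d) = μ0 − βd purely for illustration — the paper's point is precisely that μ(d) is estimated nonparametrically under a monotonicity constraint — and all numerical values are made up.

```python
import math

mu0, beta, sigma, c = 10.0, 1.5, 2.0, 6.0   # hypothetical model constants

def Phi(z):                      # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def mu(d):                       # monotone decreasing mean response
    return mu0 - beta * d

def risk(d):                     # 'hybrid' risk R(d) = P[X(d) < c]
    return Phi((c - mu(d)) / sigma)

def bmd(bmr, lo=0.0, hi=10.0):
    """Invert the additional risk R_A(d) = R(d) - R(0) at level bmr."""
    for _ in range(100):         # bisection; R_A is increasing in d here
        mid = 0.5 * (lo + hi)
        if risk(mid) - risk(0.0) < bmr:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

d_star = bmd(0.10)               # dose adding 10% to the background risk
```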

19.
20.
The kth (1 < k ≤ 2) power expectile regression (ER) balances the robustness of ordinary quantile regression against the efficiency of ER. Motivated by longitudinal ACTG 193A data with nonignorable dropouts, we propose a two-stage estimation procedure and statistical inference methods based on the kth power ER and empirical likelihood that accommodate both within-subject correlations and nonignorable dropouts. First, we construct bias-corrected generalized estimating equations by combining the kth power ER and inverse probability weighting approaches; the generalized method of moments is then used to estimate the parameters of the nonignorable dropout propensity from sufficient instrumental estimating equations. Second, to incorporate the within-subject correlations under an informative working correlation structure, we borrow the idea of the quadratic inference function to obtain improved empirical likelihood procedures. The asymptotic properties of the corresponding estimators and their confidence regions are derived. The finite-sample performance of the proposed estimators is studied through simulation, and an application to the ACTG 193A data is also presented.
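The kth power expectile minimises an asymmetrically weighted kth-power loss, interpolating between quantiles (k → 1) and expectiles (k = 2). Here is a minimal cross-sectional sketch that ignores the paper's longitudinal, dropout and empirical-likelihood machinery; the choice k = 1.5 and the data are illustrative.

```python
import numpy as np

def kth_power_expectile(y, tau, k=1.5):
    """The tau-th kth power expectile: argmin over m of
    sum(|tau - 1(y < m)| * |y - m|**k). The loss is convex in m for
    k > 1, so a ternary search over [min(y), max(y)] suffices."""
    def loss(m):
        w = np.where(y < m, 1.0 - tau, tau)   # asymmetric weights
        return np.sum(w * np.abs(y - m) ** k)
    lo, hi = y.min(), y.max()
    for _ in range(200):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if loss(m1) < loss(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

rng = np.random.default_rng(5)
y = rng.normal(size=2000)
center = kth_power_expectile(y, tau=0.5)      # symmetric case: near 0
```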
