Similar Literature
 20 similar documents found (search time: 31 ms)
1.
2.
This article considers semiparametric analysis of longitudinal binary data and proposes a GEE-smoothing-spline approach to estimating the parametric and nonparametric components. The method extends the parametric generalized estimating equation to the semiparametric setting. The nonparametric component is estimated by a smoothing spline, i.e., a natural cubic spline. We use a profile algorithm in the estimation of both the parametric and nonparametric components. Properties of the estimators are evaluated by simulation.

3.
Consider the semiparametric regression model Yi = x′iβ + g(ti) + ei for i = 1, 2, …, n. Here the design points (xi, ti) are known and nonrandom, and the ei are i.i.d. random errors with Ee1 = 0 and Ee1² = σ² < ∞. With g(·) approximated by a B-spline function, we consider a test statistic for testing H0: β = 0. In addition, an adaptive parametric test statistic is constructed, and a large-sample study of this adaptive parametric test statistic is presented.

4.
Let μ be an infinitely divisible positive measure on ℝ. If the measure ρμ is such that x⁻²[ρμ(dx) − ρμ({0})δ0(dx)] is the Lévy measure associated with μ, and ρμ is infinitely divisible, we consider for all positive reals α and β the measure Tα,β(μ), which is the convolution of μ*α and ρμ*β. For example, if μ is the inverse Gaussian law, then ρμ is a gamma law with parameter 3/2. Thus Tα,β(μ) is an extension of the Lindsay transform of the first order, restricted to distributions that are infinitely divisible. The main aim of this paper is to point out that this transformation can be applied to all natural exponential families (NEF) with strictly cubic variance functions P. We then obtain NEFs with variance functions of the form √A·P(√A), where A is an affine function of the mean of the NEF. Some of these latter types appear scattered in the literature.

5.
Robust automatic selection techniques for the smoothing parameter of a smoothing spline are introduced. They are based on a robust predictive error criterion and can be viewed as robust versions of Cp and cross-validation. They lead to smoothing splines which are stable and reliable in terms of mean squared error over a large spectrum of model distributions.
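The idea behind a robust selection criterion can be sketched with a toy kernel smoother: replace the squared leave-one-out residual by a bounded Huber loss, so that a few outliers cannot dominate the choice of smoothing parameter. The Nadaraya-Watson smoother, the Huber constant, and the data below are illustrative assumptions, not the authors' construction:

```python
import math

def nw_predict(x0, data, h, skip=None):
    """Nadaraya-Watson estimate at x0 with a Gaussian kernel,
    optionally leaving out the observation at index `skip`."""
    num = den = 0.0
    for i, (x, y) in enumerate(data):
        if i == skip:
            continue
        w = math.exp(-0.5 * ((x - x0) / h) ** 2)
        num += w * y
        den += w
    return num / den

def robust_cv(data, h, c=1.345):
    """Leave-one-out criterion using a Huber loss in place of squared error."""
    total = 0.0
    for i, (x, y) in enumerate(data):
        r = abs(y - nw_predict(x, data, h, skip=i))
        total += 0.5 * r * r if r <= c else c * (r - 0.5 * c)
    return total / len(data)

# Made-up data with one outlying response; pick h from a small grid.
data = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0), (3.0, 3.1), (4.0, 9.0)]
best_h = min([0.3, 0.6, 1.0, 2.0], key=lambda h: robust_cv(data, h))
```

Minimising robust_cv over a bandwidth grid plays the role of the robust Cp/cross-validation selectors described above.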

6.
The performance of nine different nonparametric regression estimates is empirically compared on ten different real datasets. The number of data points in the real datasets varies between 7,900 and 18,000, and each real dataset contains between 5 and 20 variables. The nonparametric regression estimates include kernel, partitioning, nearest neighbor, additive spline, neural network, penalized smoothing spline, local linear kernel, regression tree, and random forest estimates. The main result is a table containing the empirical L2 risks of all nine nonparametric regression estimates on the evaluation part of the different datasets. The neural network and random forest estimates perform best. The datasets are publicly available, so any new regression estimate can easily be compared with all nine estimates considered in this article by applying it to the publicly available data and computing its empirical L2 risks on the evaluation part of the datasets.
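The comparison criterion itself is simple to state: the empirical L2 risk of a fitted estimate on the evaluation part is just its mean squared prediction error on the held-out points. A minimal sketch (the data here are made up):

```python
def empirical_l2_risk(estimate, eval_data):
    """Mean squared prediction error of a fitted regression estimate
    on held-out (x, y) pairs."""
    return sum((y - estimate(x)) ** 2 for x, y in eval_data) / len(eval_data)

# Toy illustration: the constant predictor (the mean of y) on made-up data.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]
mean_y = sum(y for _, y in data) / len(data)
risk = empirical_l2_risk(lambda x: mean_y, data)  # the empirical variance of y
```

Any of the nine estimates in the paper's table could be plugged in as `estimate` to reproduce this kind of comparison.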

7.
We propose a non-linear density estimator, which is locally adaptive, like wavelet estimators, and positive everywhere, without a log- or root-transform. This estimator is based on maximizing a non-parametric log-likelihood function regularized by a total variation penalty. The smoothness is driven by a single penalty parameter, and to avoid cross-validation, we derive an information criterion based on the idea of universal penalty. The penalized log-likelihood maximization is reformulated as an ℓ1-penalized strictly convex programme whose unique solution is the density estimate. A Newton-type method cannot be applied to calculate the estimate because the ℓ1-penalty is non-differentiable. Instead, we use a dual block coordinate relaxation method that exploits the problem structure. By comparing with kernel, spline and taut string estimators in a Monte Carlo simulation, and by investigating the sensitivity to ties on two real data sets, we observe that the new estimator achieves good L1 and L2 risk for densities with sharp features, and behaves well with ties.

8.
This paper deals with √n-consistent estimation of the parameter μ in the RCAR(1) model defined by the difference equation Xj = (μ + Uj)Xj−1 + ej (j ∈ ℤ), where {ej : j ∈ ℤ} and {Uj : j ∈ ℤ} are two independent sets of i.i.d. random variables with zero means, positive finite variances, and E[(μ + U1)²] < 1. A class of asymptotically normal estimators of μ indexed by a family of bounded measurable functions is introduced. An estimator is then constructed which is asymptotically equivalent to the best estimator in that class. This estimator, asymptotically equivalent to the quasi-maximum likelihood estimator derived in Nicholls & Quinn (1982), is much simpler to calculate and is asymptotically normal without the additional moment conditions those authors impose.
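A conditional least-squares estimator illustrates the kind of easy-to-compute √n-consistent estimator the abstract is concerned with; it is not the paper's estimator, and the Gaussian choices for Uj and ej below are assumptions made only for the simulation:

```python
import random

def simulate_rcar1(n, mu, sd_u, sd_e, seed=0):
    """Simulate X_j = (mu + U_j) X_{j-1} + e_j with Gaussian U_j and e_j
    (a hypothetical parametrisation; stationarity needs E[(mu+U)^2] < 1)."""
    rng = random.Random(seed)
    x, xs = 0.0, []
    for _ in range(n):
        x = (mu + rng.gauss(0.0, sd_u)) * x + rng.gauss(0.0, sd_e)
        xs.append(x)
    return xs

def cls_estimate(xs):
    """Conditional least-squares estimate of mu:
    sum X_j X_{j-1} / sum X_{j-1}^2."""
    num = sum(xs[j] * xs[j - 1] for j in range(1, len(xs)))
    den = sum(xs[j - 1] ** 2 for j in range(1, len(xs)))
    return num / den

# Here E[(mu + U)^2] = 0.25 + 0.04 = 0.29 < 1, so the process is stationary.
mu_hat = cls_estimate(simulate_rcar1(20000, mu=0.5, sd_u=0.2, sd_e=1.0))
```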

9.

This article considers nonparametric regression problems and develops a model-averaging procedure for smoothing spline regression. Unlike most smoothing parameter selection studies, which determine a single optimum smoothing parameter, our focus here is on prediction accuracy for the true conditional mean of Y given a predictor X. Our method consists of two steps. The first step is to construct a class of smoothing spline regression models based on nonparametric bootstrap samples, each with an appropriate smoothing parameter. The second step is to average the bootstrap smoothing spline estimates of different smoothness to form a final improved estimate. To minimize the prediction error, we estimate the model weights using a delete-one-out cross-validation procedure. A simulation study, implemented in R, compares the well-known cross-validation (CV) and generalized cross-validation (GCV) criteria with the proposed method. The new method is straightforward to implement and performs reliably in simulations.
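The two-step structure, building several candidate fits and then weighting them by delete-one-out prediction error, can be sketched with two toy candidates (a constant and a straight line) standing in for the paper's bootstrap smoothing spline fits of different smoothness:

```python
def fit_mean(train):
    """Candidate 1: the constant (sample-mean) predictor."""
    m = sum(y for _, y in train) / len(train)
    return lambda x: m

def fit_line(train):
    """Candidate 2: the least-squares straight line."""
    n = len(train)
    sx = sum(x for x, _ in train); sy = sum(y for _, y in train)
    sxx = sum(x * x for x, _ in train); sxy = sum(x * y for x, y in train)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return lambda x: a + b * x

def cv_weight(data, fit1, fit2, grid=101):
    """Pick w in [0, 1] minimising the delete-one-out squared error of the
    averaged predictor w*fit1 + (1 - w)*fit2."""
    best_w, best_err = 0.0, float("inf")
    for k in range(grid):
        w = k / (grid - 1)
        err = 0.0
        for i, (x, y) in enumerate(data):
            train = data[:i] + data[i + 1:]
            pred = w * fit1(train)(x) + (1 - w) * fit2(train)(x)
            err += (y - pred) ** 2
        if err < best_err:
            best_w, best_err = w, err
    return best_w

# Made-up, exactly linear data: all weight should go to the line candidate.
line_data = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0), (3.0, 3.0)]
w_star = cv_weight(line_data, fit_line, fit_mean)
```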

10.
The purposes of this research are: (1) to obtain the spline estimator in nonparametric regression for longitudinal data, with and without accounting for the autocorrelation between observations within a subject; (2) to develop an algorithm that generates simulated data with a specified autocorrelation level for each sample size (N) and error variance (EV); and (3) to establish the shape of the spline estimator in nonparametric regression for longitudinal data on simulations with various autocorrelation levels, and to compare the DM and TM approaches for predicting the spline estimator as the within-subject autocorrelation of the simulated data varies. The results of the application are as follows: (a) smoothing splines fitted by penalized weighted least squares (PWLS), with or without consideration of autocorrelation, give significantly different spline estimators in general (over all sizes and all error variance levels) when the autocorrelation level exceeds 0.8; (b) compared by sample size, the PWLS spline estimators with (DM) and without (TM) consideration of autocorrelation differ significantly when the autocorrelation exceeds 0.8 (overall, and for moderate and large sample sizes) and 0.7 (for small sample sizes); (c) compared by error variance, the PWLS spline estimators with (DM) and without (TM) consideration of autocorrelation differ significantly when the autocorrelation exceeds 0.8 (overall, and for moderate and large variances) and 0.7 (for small variances).

11.
In nonparametric regression the smoothing parameter can be selected by minimizing a mean squared error (MSE) based criterion. For spline smoothing one can also rewrite the smooth estimation as a linear mixed model in which the smoothing parameter appears as the a priori variance of the spline basis coefficients. This makes it possible to employ maximum likelihood (ML) theory to estimate the smoothing parameter as a variance component. In this paper the relation between the two approaches is illuminated for penalized spline smoothing (P-splines) as suggested in Eilers and Marx, Statist. Sci. 11(2) (1996) 89. Theoretical and empirical arguments show that the ML approach is biased towards undersmoothing, i.e. it chooses too complex a model compared to the MSE criterion. The result is in line with classical spline smoothing, even though the asymptotic arguments are different. This is because P-spline smoothing employs a finite-dimensional basis, while in classical spline smoothing the basis grows with the sample size.

12.

This paper considers a partially nonlinear model E(Y | X, z, t) = f(X, β) + zᵀg(t) and gives its T-type estimate, a weighted quasi-likelihood estimate using the sieve method that can be obtained by an EM algorithm. The influence functions and asymptotic properties of the T-type estimate (consistency and asymptotic normality) are discussed, and convergence rates of both the parametric and nonparametric components are obtained. Simulation results show the shape of the influence functions and demonstrate that the T-type estimate performs quite well. The proposed estimate is also applied to a data set and compared with the least squares estimate and the least absolute deviation estimate.

13.
The bootstrap variance estimate is widely used in semiparametric inference. However, its theoretical validity is a well-known open problem. In this paper, we provide a first theoretical study of bootstrap moment estimates in semiparametric models. Specifically, we establish the bootstrap moment consistency of the Euclidean parameter, which immediately implies the consistency of t-type bootstrap confidence sets. It is worth pointing out that the only additional cost to achieve bootstrap moment consistency, in contrast with distribution consistency, is to strengthen the L1 maximal inequality condition required for the latter to an Lp maximal inequality condition for p ≥ 1. The general Lp multiplier inequality developed in this paper is also of independent interest. These conclusions hold for bootstrap methods with exchangeable bootstrap weights, for example, the nonparametric bootstrap and the Bayesian bootstrap. The general theory is illustrated in the celebrated Cox regression model.
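As a concrete instance of the object being studied, the nonparametric bootstrap (multinomial resampling weights, one of the exchangeable-weight schemes covered by the theory) estimates the variance of a statistic such as the sample median; the data and replication count below are arbitrary choices for illustration:

```python
import random
import statistics

def bootstrap_variance(sample, stat, n_boot=2000, seed=0):
    """Nonparametric bootstrap estimate of the variance of stat(sample):
    resample with replacement, recompute the statistic, and take the
    variance of the bootstrap replicates."""
    rng = random.Random(seed)
    reps = []
    for _ in range(n_boot):
        resample = [rng.choice(sample) for _ in sample]
        reps.append(stat(resample))
    return statistics.pvariance(reps)

# Made-up Gaussian data; the true variance of the median of 200 N(0,1)
# draws is roughly pi/(2*200), about 0.008.
rng = random.Random(1)
data = [rng.gauss(0.0, 1.0) for _ in range(200)]
var_hat = bootstrap_variance(data, statistics.median)
```

A t-type confidence set of the kind whose consistency the paper establishes would then be built from the point estimate and the square root of `var_hat`.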

14.
A method for nonparametric estimation of a density based on a randomly censored sample is presented. The density is expressed as a linear combination of cubic M-splines, and the coefficients are determined by pseudo-maximum-likelihood estimation (the likelihood is maximized conditionally on data-dependent knots). By using regression splines (with a small number of knots) it is possible to reduce the estimation problem to a space of low dimension while preserving flexibility, thus striking a compromise between parametric approaches and ordinary nonparametric approaches based on spline smoothing. The number of knots is determined by the minimum AIC. Examples with simulated and real data are presented. Asymptotic theory and the bootstrap indicate that the precision and accuracy of the estimates are satisfactory.

15.
This paper considers the problem of combining k unbiased estimates xi of a parameter μ, where each estimate xi is the average of ni + 1 independent normal observations with unknown mean μ and unknown variance σi². The behavior of several commonly used estimators of μ is studied by means of an empirical sampling study, and the empirical results of this experiment are interpreted in terms of previous theoretical results. Finally, some extrapolations are made as to how these estimators might behave under varying conditions, and some new estimators are proposed which might have higher efficiencies under certain conditions than those generally used.
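One of the commonly used estimators in such settings weights each xi by its inverse variance; a minimal sketch, with made-up numbers:

```python
def combine(estimates, variances):
    """Inverse-variance weighted combination of unbiased estimates of a
    common parameter (with estimated variances this is the Graybill-Deal
    type estimator)."""
    weights = [1.0 / v for v in variances]
    return sum(w * x for w, x in zip(weights, estimates)) / sum(weights)

mu_equal = combine([1.0, 2.0], [1.0, 1.0])   # equal precision: simple average
mu_skewed = combine([0.0, 10.0], [1.0, 4.0]) # the more precise estimate dominates
```

With equal variances this reduces to the simple average; the empirical question studied in the paper is how such weighting behaves when the σi² must themselves be estimated from small ni.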

16.
Let x be a random variable having the normal distribution with mean μ and variance c²μ², where c is a known constant. The maximum likelihood estimate of μ when the lowest r1 and the highest r2 sample values are censored is given, and the asymptotic variance of the maximum likelihood estimator is obtained.

17.
Rasul A. Khan, Statistics, 2015, 49(3): 705-710
Let X1, X2, …, Xn be i.i.d. N(μ, aμ²) (a > 0) random variables with an unknown mean μ > 0 and known coefficient of variation (CV) √a. The estimation of μ is revisited, and it is shown that a modified version of an unbiased estimator of μ [cf. Khan RA. A note on estimating the mean of a normal distribution with known CV. J Am Stat Assoc. 1968;63:1039-1041] is more efficient. A certain linear minimum mean square estimator of Gleser and Healy [Estimating the mean of a normal distribution with known CV. J Am Stat Assoc. 1976;71:977-981] is also modified and improved. These improved estimators are compared with the maximum likelihood estimator under a squared-error loss function. Based on asymptotic considerations, a large-sample confidence interval is also mentioned.
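To see why a known CV helps, note that among multiples c·x̄ of the sample mean, the MSE μ²[c²·a/n + (c − 1)²] is minimised at c = n/(n + a), a Searls-type shrinkage. This is only a sketch of the general idea; Khan's modified unbiased estimator and the Gleser-Healy linear estimator differ in detail:

```python
def shrunken_mean(sample, a):
    """For X_i ~ N(mu, a*mu^2) with known a, among estimators c*xbar the
    MSE mu^2 * (c^2 * a/n + (c - 1)^2) is minimised at c = n/(n + a),
    so knowing the CV sqrt(a) buys a strict MSE improvement over xbar."""
    n = len(sample)
    xbar = sum(sample) / n
    return n / (n + a) * xbar

# Made-up sample with xbar = 1.05; with n = 4 and a = 1 the shrinkage
# factor is 4/5.
est = shrunken_mean([1.1, 0.9, 1.0, 1.2], a=1.0)
```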

18.
Two consistent nonexact-confidence-interval estimation methods, both derived from the consistency-equivalence theorem in Plante (1991), are suggested for estimating problematic parametric functions with no consistent exact solution and for which standard optimal confidence procedures are inadequate or even absurd, i.e., can provide confidence statements with a 95% empty or all-inclusive confidence set. A belt C(·) from a consistent nonexact-belt family, used with two confidence coefficients (γ = infθ Pθ[θ ∈ C(X)] and γ+ = supθ Pθ[θ ∈ C(X)]), is shown to provide a consistent nonexact-belt solution for estimating μ2/μ1 in the Behrens-Fisher problem. A rule for consistent behaviour enables any confidence belt to be used consistently by providing each sample point with best upper and lower confidence levels [δ+(x) ≥ γ+, δ(x) ≤ γ], which give least-conservative consistent confidence statements ranging from practically exact through informative to noninformative. The rule also provides a consistency correction L(x) = δ+(x) − δ(x), enabling alternative confidence solutions to be compared on grounds of adequacy; this is demonstrated by comparing consistent conservative sample-point-wise solutions with inconsistent standard solutions for estimating μ2/μ1 (the Creasy-Fieller-Neyman problem) and $\sqrt {\mu _1^2 + \mu _2^2 }$, a distance-estimation problem closely related to Stein's 1959 example.

19.
We study the problem of testing H0: μ ∈ P against H1: μ ∉ P, based on a random sample of N observations from a p-dimensional normal distribution Np(μ, Σ) with Σ > 0 and P a closed, convex, positively homogeneous set. We develop the likelihood-ratio test (LRT) for this problem and show that the union-intersection principle leads to a test equivalent to the LRT. The principle also gives a large class of tests which are shown to be admissible by Stein's theorem (1956). Finally, we give the α-level cutoff points for the LRT.

20.
It is an elementary fact that the size of an orthogonal array of strength t on k factors must be a multiple of a certain number, say Lt, that depends on the orders of the factors. Thus Lt is a lower bound on the size of arrays of strength t on those factors, and is no larger than Lk, the size of the complete factorial design. We investigate the relationship among the numbers Lt, and two questions in particular: For what t is Lt < Lk? And when Lt = Lk, is the complete factorial design the only array of that size and strength t? Arrays are assumed to be mixed-level.

We refer to an array of size less than Lk as a proper fraction. Guided by our main result, we construct a variety of mixed-level proper fractions of strength k − 1 that also satisfy a certain group-theoretic condition.
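The elementary divisibility fact behind Lt is that restricting a strength-t array to any t factors yields every level combination equally often, so the array size is divisible by the product of those t orders, hence by the least common multiple over all t-subsets. A sketch of computing Lt on this basis (Python 3.9+ for math.lcm):

```python
from itertools import combinations
from math import lcm, prod

def divisibility_bound(orders, t):
    """L_t for factors with the given orders: the lcm, over all t-subsets
    of factors, of the product of their orders. Every orthogonal array of
    strength t on these factors has size divisible by this number."""
    return lcm(*(prod(sub) for sub in combinations(orders, t)))

# Three 2-level factors: L_2 = 4 < L_3 = 8, so a proper half fraction of
# strength 2 can exist. For orders (2, 3, 4), L_2 = lcm(6, 8, 12) = 24 = L_3.
l2 = divisibility_bound((2, 2, 2), 2)
l3 = divisibility_bound((2, 2, 2), 3)
```

The first example shows a case with Lt < Lk; the second shows Lt = Lk, the situation in which the paper asks whether the complete factorial is the only array of that size and strength.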

