Similar Documents
1.
The smoothing spline method is used to fit a curve to a noisy data set, where selection of the smoothing parameter is essential. An adaptive Cp criterion (Chen, C. S., and H. C. Huang. 2011. An improved Cp criterion for spline smoothing. Journal of Statistical Planning and Inference 141:445–52) based on Stein's unbiased risk estimate has been proposed to select the smoothing parameter; it not only considers the usual effective degrees of freedom but also takes into account the selection variability. The resulting fitted curve has been shown to be superior to and more stable than those produced by commonly used selection criteria, and it possesses the same asymptotic optimality as Cp. In this paper, we further discuss some characteristics of the selection of the smoothing parameter, especially the selection variability.

2.
The completeness of the induced distribution of the convex hull of a random set of points drawn uniformly from a convex region seems not to have been noticed before. The result generalizes a well-known result for dimension one. As a consequence, there exists a theory of best unbiased estimation for certain functionals of the convex region; for example, a best estimator of the centroid of the convex region can be constructed. This estimator is distinct from the centroid of the convex hull. Therefore another generalization of a property holding in dimension one, namely the unbiasedness of the centroid of the convex hull, is seen to fail.
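The centroid-of-the-convex-hull estimator that the abstract contrasts with the best unbiased estimator can be illustrated with a minimal, self-contained two-dimensional sketch (the helper names `convex_hull` and `polygon_centroid` are illustrative, not from the paper):

```python
def convex_hull(pts):
    # Andrew's monotone chain; returns hull vertices in counter-clockwise order.
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def polygon_centroid(v):
    # Centroid of a simple polygon via the shoelace formula.
    a = cx = cy = 0.0
    n = len(v)
    for i in range(n):
        x0, y0 = v[i]
        x1, y1 = v[(i + 1) % n]
        w = x0 * y1 - x1 * y0
        a += w
        cx += (x0 + x1) * w
        cy += (y0 + y1) * w
    a *= 0.5
    return cx / (6 * a), cy / (6 * a)
```

Applied to uniform draws from a convex region, the centroid of the hull is systematically pulled inward relative to the centroid of the region, which is the bias the abstract refers to.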

3.
The mixed effects models with two variance components are often used to analyze longitudinal data. For these models, we compare two approaches to estimating the variance components, the analysis of variance approach and the spectral decomposition approach. We establish a necessary and sufficient condition for the two approaches to yield identical estimates, and some sufficient conditions for the superiority of one approach over the other, under the mean squared error criterion. Applications of the methods to circular models and longitudinal data are discussed. Furthermore, simulation results indicate that better estimates of variance components do not necessarily imply higher power of the tests or shorter confidence intervals.
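The analysis-of-variance approach to the two variance components can be sketched for the balanced one-way random-effects model y_ij = mu + a_i + e_ij; the method-of-moments identities used here are standard, and the function name is hypothetical:

```python
def anova_variance_components(groups):
    """ANOVA (method-of-moments) estimates for the balanced one-way
    random-effects model y_ij = mu + a_i + e_ij, a_i ~ (0, sigma2_a),
    e_ij ~ (0, sigma2_e). `groups` is a list of equal-length lists."""
    k = len(groups)            # number of groups
    n = len(groups[0])         # common group size (balanced design assumed)
    grand = sum(sum(g) for g in groups) / (k * n)
    means = [sum(g) / n for g in groups]
    ss_within = sum((y - m) ** 2 for g, m in zip(groups, means) for y in g)
    ss_between = n * sum((m - grand) ** 2 for m in means)
    ms_within = ss_within / (k * (n - 1))
    ms_between = ss_between / (k - 1)
    sigma2_e = ms_within
    # E[MSB] = sigma2_e + n * sigma2_a, so solve and truncate at zero.
    sigma2_a = max((ms_between - ms_within) / n, 0.0)
    return sigma2_a, sigma2_e
```

The spectral decomposition approach instead estimates the components from the eigen-structure of the covariance matrix; the abstract's point is that the two need not coincide except under the condition it establishes.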

4.
The best linear unbiased predictor (BLUP) in mixed models is a function of the variance components, which are estimated using maximum likelihood (ML) or restricted maximum likelihood (REML) methods. A drawback of these standard likelihood-based approaches is that the variance-component estimates can be negative or zero, in which case ML and REML either provide no BLUPs or make them all equal. To overcome this drawback, we provide a generalized estimate (GE) of the BLUP that does not suffer from the problem of negative or zero variance components, and compare its performance against the ML and REML estimates of the BLUP. Simulated and published data are used for the comparison.
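In the balanced one-way case, the dependence of the BLUP on the variance components, and its collapse to zero when the between-group component is estimated as zero, can be sketched as follows (an illustrative shrinkage formula, not the paper's generalized estimate):

```python
def blup_random_effects(groups, sigma2_a, sigma2_e):
    """BLUP of the random group effects a_i in the balanced one-way
    random-effects model: a_hat_i = shrinkage * (group mean - grand mean),
    with shrinkage = sigma2_a / (sigma2_a + sigma2_e / n)."""
    k = len(groups)
    n = len(groups[0])
    grand = sum(sum(g) for g in groups) / (k * n)
    lam = sigma2_a / (sigma2_a + sigma2_e / n)   # shrinkage factor in [0, 1)
    return [lam * (sum(g) / n - grand) for g in groups]
```

If the estimated sigma2_a is zero, the shrinkage factor is zero and every predicted effect degenerates to 0, which is the failure mode the abstract describes for the likelihood-based estimates.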

5.
Competing risks occur when subjects may fail from one of several mutually exclusive causes. For example, when a patient with cancer may also die from another cause, we are interested in the effect of a certain covariate on the probability of dying of cancer by a certain time. Several approaches have been suggested for analysing competing risks data when the cause of failure is completely observed. In this paper, our interest is in accommodating missing causes of failure as well as interval-censored failure times; no existing method addresses this combined problem. We apply the pseudo-value approach of Klein and Andersen [Klein JP, Andersen PK. Regression modeling of competing risks data based on pseudovalues of the cumulative incidence function. Biometrics. 2005;61:223-229] based on the estimated cumulative incidence function, and regression coefficients are estimated through multiple imputation. We evaluate the suggested method by comparing it with a complete-case analysis in several simulation settings.
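The cumulative incidence function underlying the pseudo-value approach can be estimated nonparametrically when causes are fully observed; below is a minimal Aalen-Johansen-type sketch (missing causes and interval censoring, the paper's actual setting, are not handled here):

```python
def cumulative_incidence(times, causes, cause=1):
    """Nonparametric (Aalen-Johansen type) cumulative incidence function
    for one failure cause in the presence of competing risks.
    causes: 0 = right-censored, 1, 2, ... = failure causes.
    Returns (time, CIF) pairs at each failure time of `cause`."""
    data = sorted(zip(times, causes))
    at_risk = len(data)
    surv = 1.0     # overall survival just before the current time
    cif = 0.0
    out = []
    i = 0
    while i < len(data):
        t = data[i][0]
        tied = [c for tt, c in data if tt == t]
        d_all = sum(1 for c in tied if c != 0)      # failures from any cause
        d_cause = sum(1 for c in tied if c == cause)
        if d_cause > 0:
            cif += surv * d_cause / at_risk         # S(t-) * hazard increment
            out.append((t, cif))
        surv *= 1.0 - d_all / at_risk
        at_risk -= len(tied)
        i += len(tied)
    return out
```

Pseudo-values for subject i at time t are then n * CIF(t) - (n - 1) * CIF_{-i}(t), i.e. leave-one-out recomputations of this estimator, which is where the regression modelling enters.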

6.
This paper deals with the problem of estimating the mean parameter of a truncated normal distribution with known coefficient of variation. In previous treatments of this problem, most authors have used the sample standard deviation for estimating this parameter. In the present paper we use Gini's coefficient of mean difference g and obtain the minimum variance unbiased estimate of the mean based on a linear function of the sample mean and g. It is shown that this new estimate has desirable properties for small samples as well as for large samples. We also give a numerical example.
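Gini's coefficient of mean difference is straightforward to compute; the sketch below shows g together with a generic linear combination of the sample mean and g, with the weights c1 and c2 left as free parameters since the paper's optimal weights (which depend on the known coefficient of variation) are not reproduced here:

```python
def gini_mean_difference(x):
    """Gini's coefficient of mean difference:
    g = 2 / (n * (n - 1)) * sum_{i < j} |x_i - x_j|."""
    n = len(x)
    s = sum(abs(a - b) for i, a in enumerate(x) for b in x[i + 1:])
    return 2.0 * s / (n * (n - 1))

def mean_estimate(x, c1, c2):
    """Linear combination c1 * xbar + c2 * g of the kind the paper studies;
    c1 and c2 are placeholders for the paper's derived optimal weights."""
    xbar = sum(x) / len(x)
    return c1 * xbar + c2 * gini_mean_difference(x)
```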

7.
Spline smoothing is a popular technique for curve fitting, in which selection of the smoothing parameter is crucial. Many methods such as Mallows’ Cp, generalized maximum likelihood (GML), and the extended exponential (EE) criterion have been proposed to select this parameter. Although Cp is shown to be asymptotically optimal, it is usually outperformed by other selection criteria for small to moderate sample sizes due to its high variability. On the other hand, GML and EE are more stable than Cp, but they do not possess the same asymptotic optimality as Cp. Instead of selecting this smoothing parameter directly using Cp, we propose to select among a small class of selection criteria based on Stein's unbiased risk estimate (SURE). Due to the selection effect, the spline estimate obtained from a criterion in this class is nonlinear. Thus, the effective degrees of freedom in SURE contains an adjustment term in addition to the trace of the smoothing matrix, which cannot be ignored in small to moderate sample sizes. The resulting criterion, which we call adaptive Cp, is shown to have an analytic expression, and hence can be efficiently computed. Moreover, adaptive Cp is not only demonstrated to be superior and more stable than commonly used selection criteria in a simulation study, but also shown to possess the same asymptotic optimality as Cp.
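The classical (non-adaptive) Cp selection that the paper takes as its starting point can be sketched on a simple ridge-type linear smoother, where the effective degrees of freedom is the trace of the smoothing matrix (the polynomial basis and function name are illustrative; the adaptive criterion's adjustment term is not included):

```python
import numpy as np

def cp_select(x, y, sigma2, lambdas, degree=5):
    """Mallows' Cp selection of a smoothing parameter for the linear
    smoother y_hat = S(lam) y, S = X (X'X + lam I)^{-1} X', built on a
    polynomial basis. Cp(lam) = RSS + 2 * sigma2 * df, df = trace(S).
    Returns (best lambda, its Cp value, its effective df)."""
    X = np.vander(x, degree + 1, increasing=True)
    best = None
    for lam in lambdas:
        A = X.T @ X + lam * np.eye(X.shape[1])
        S = X @ np.linalg.solve(A, X.T)
        resid = y - S @ y
        df = np.trace(S)                    # effective degrees of freedom
        cp = resid @ resid + 2.0 * sigma2 * df
        if best is None or cp < best[1]:
            best = (lam, cp, df)
    return best
```

Selecting lam by minimizing over the grid is itself a data-dependent step; the paper's point is that this selection effect adds a term to the effective degrees of freedom that plain trace(S) misses.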

8.
In varying-coefficient models, an important question is to determine whether some of the varying coefficients are actually invariant. This article proposes a penalized likelihood method in the framework of smoothing spline ANOVA models, with a penalty designed to automatically distinguish varying coefficients from invariant ones. Unlike stepwise procedures, the method simultaneously quantifies and estimates the coefficients. An efficient algorithm is given and ways of choosing the smoothing parameters are discussed. Simulation results and an analysis of the Boston housing data illustrate the usefulness of the method. The proposed approach is further extended to longitudinal data analysis.

9.
For the problem of estimating a sparse sequence of coefficients of a parametric or non-parametric generalized linear model, posterior mode estimation with a Subbotin(λ, ν) prior achieves thresholding, and therefore model selection, when ν ∈ [0, 1] for a class of likelihood functions. The proposed estimator also offers a continuum between the (forward/backward) best subset estimator (ν = 0), its approximate convexification called the lasso (ν = 1), and ridge regression (ν = 2). Rather than fixing ν, selecting the two hyperparameters λ and ν adds flexibility for a better fit, provided both are well selected from the data. Considering first the canonical Gaussian model, we generalize the Stein unbiased risk estimate, SURE(λ, ν), to the situation where the thresholding function is not almost differentiable (i.e. ν < 1). We then propose a more general selection of λ and ν by deriving an information criterion that can be employed, for instance, for the lasso or wavelet smoothing. We investigate some asymptotic properties in parametric and non-parametric settings. Simulations and applications to real data show excellent performance.
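The ν-continuum of posterior-mode rules can be illustrated at its three landmark values; the closed forms below are the standard ones for these special cases, not the general Subbotin(λ, ν) rule:

```python
import math

def hard_threshold(z, lam):
    """nu = 0 limit: hard thresholding, as in best-subset selection."""
    return z if abs(z) > lam else 0.0

def soft_threshold(z, lam):
    """nu = 1 (Laplace prior): soft thresholding, as in the lasso."""
    return math.copysign(max(abs(z) - lam, 0.0), z)

def ridge_shrink(z, lam):
    """nu = 2 (Gaussian prior): ridge shrinkage; shrinks but never
    thresholds, so it performs no model selection."""
    return z / (1.0 + lam)
```

For ν in (0, 1) the posterior-mode rule interpolates between the hard and soft rules and generally has no closed form, which is why the abstract's generalized SURE(λ, ν) must cope with thresholding functions that are not almost differentiable.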

10.
First, to test the existence of random effects in semiparametric mixed models (SMMs) under only moment conditions on the random effects and errors, we propose a very simple and easily implemented non-parametric test based on the difference between two estimators of the error variance: one is consistent only under the null, and the other is consistent under both the null and the alternatives. Instead of erroneously treating this as a non-standard two-sided testing problem, as in most papers in the literature, we solve it correctly and prove that the asymptotic distribution of our test statistic is standard normal. This avoids the Monte Carlo approximations needed to obtain p-values for many existing methods, and the test can detect local alternatives approaching the null at rates up to root-n. Second, as the higher moments of the error must be estimated because the standardizing constant involves these quantities, we propose a general method to conveniently estimate any moments of the error. Finally, a simulation study and a real data analysis are conducted to investigate the properties of our procedures.

11.
12.
Modelling volatility in the form of a conditional variance function has been a popular approach, mainly because of its application in financial risk management. Among existing methods, we distinguish the parametric GARCH models and the nonparametric local polynomial approximation using weighted least squares or the Gaussian likelihood function. We introduce an alternative likelihood estimate of the conditional variance and show that substituting the error density with its estimate yields similar asymptotic properties; that is, the proposed estimate is adaptive to the error distribution. Theoretical comparison with existing estimates reveals substantial gains in efficiency, especially if the error distribution has fatter tails than the Gaussian distribution. Simulated data confirm the theoretical findings, while an empirical example demonstrates the gains of the proposed estimate.
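The parametric benchmark mentioned in the abstract, the GARCH(1,1) conditional variance recursion, can be sketched directly (illustrative only; the paper's estimator is the nonparametric, error-adaptive one):

```python
def garch11_variance(returns, omega, alpha, beta, h0):
    """Conditional variance recursion of a GARCH(1,1) model:
    h_t = omega + alpha * r_{t-1}^2 + beta * h_{t-1},
    started from an initial variance h0. Returns h_1, ..., h_T."""
    h = [h0]
    for r in returns[:-1]:
        h.append(omega + alpha * r * r + beta * h[-1])
    return h
```

The Gaussian quasi-likelihood fits (omega, alpha, beta) by maximizing the normal log-likelihood of the returns given these h_t; the paper's adaptive estimator replaces the assumed Gaussian error density with a density estimate.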

13.
There are situations in the analysis of failure time or lifetime data where the censoring times of unfailed units are missing. The non-parametric estimator of the lifetime distribution for such data is available in the literature. In this paper we consider an extension of this situation to the univariate and bivariate competing risks setups. The maximum likelihood and simple moment estimators of cause-specific distribution functions in both univariate and bivariate situations are developed. A simulation study is carried out to assess the performance of the estimators. Finally, we illustrate the method with a real data set.

14.
15.
In this paper, the preliminary test approach to estimation of the linear regression model with Student's t errors is considered. The preliminary test almost unbiased two-parameter estimator is proposed for the case where it is suspected that the regression parameters may be restricted by a constraint. The quadratic biases and quadratic risks of the proposed estimators are derived and compared under both the null and alternative hypotheses. The conditions for superiority of the proposed estimators with respect to the departure parameter and the biasing parameters k and d are derived. Furthermore, a real data example and a Monte Carlo simulation study are provided to illustrate some of the theoretical results.

16.
The current regulation of non-carcinogenic effects has generally been based on dividing a safety factor into an experimental no-observed-effect-level (NOEL), giving a regulatory reference dose (RfD). This approach does not attempt to estimate the risk as a function of dose; it assumes no significant risk for the dose below the RfD. This paper proposes a mathematical model for finding the upper confidence limit on risk and lower confidence limit on dose for quantitative risk assessment when the responses follow a normal distribution. The proposed procedure appears to be conservative; this is supported by results of a simulation study. The procedure is illustrated by application to real data.

17.
Time to failure due to fatigue is one of the common quality characteristics in material engineering applications. In this article, acceptance sampling plans are developed for the Birnbaum–Saunders distribution percentiles when the life test is truncated at a pre-specified time. The minimum sample size necessary to ensure the specified life percentile is obtained under a given customer's risk. The operating characteristic values (and curves) of the sampling plans as well as the producer's risk are presented. The R package named spbsq is developed to implement the developed sampling plans. Two examples with real data sets are also given as illustration.
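The logic of a truncated life test under the Birnbaum–Saunders distribution can be sketched for the simplest zero-acceptance plan: an item fails by the censoring time with probability given by the BS CDF, and the minimum sample size makes the acceptance probability of an unacceptable lot no larger than the consumer's risk (function names are illustrative; the spbsq package implements the full plans):

```python
import math

def bs_cdf(t, alpha, beta):
    """Birnbaum-Saunders CDF with shape alpha and scale (median) beta:
    F(t) = Phi((sqrt(t / beta) - sqrt(beta / t)) / alpha)."""
    z = (math.sqrt(t / beta) - math.sqrt(beta / t)) / alpha
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def min_sample_size(t0, alpha, beta, consumer_risk):
    """Smallest n for the zero-acceptance (c = 0) plan: accept the lot only
    if no item fails by the censoring time t0, and require the acceptance
    probability at the stated (unacceptable) quality level to satisfy
    (1 - p)^n <= consumer_risk, where p = F(t0)."""
    p = bs_cdf(t0, alpha, beta)
    return math.ceil(math.log(consumer_risk) / math.log(1.0 - p))
```

Plans with c > 0 replace (1 - p)^n by a binomial tail sum, and percentile-based plans re-express beta in terms of the percentile being assured; both extensions follow the same accept/reject logic.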

18.
In this note we obtain, based on the sample sum, a statistic to test the homogeneity of a random sample from a positive (zero-truncated) Lagrangian Poisson distribution given in Consul and Jain (1973). This test statistic conforms, in a special case, to that of Singh (1978). A goodness-of-fit test statistic for the Borel-Tanner distribution is obtained as a particular case of our results.

19.
We define the maximum relevance-weighted likelihood estimator (MREWLE) using the relevance-weighted likelihood function introduced by Hu and Zidek (1995), and establish the consistency of the MREWLE under a wide range of conditions. Our results generalize those of Wald (1948) to both non-identically distributed random variables and unequally weighted likelihoods (when dealing with independent data sets of varying relevance to the inferential problem of interest). Asymptotic normality is also proven. Application of these results to generalized smoothing models is discussed.

20.
Estimators of percentiles of location and scale parameter distributions are optimized based on Pitman closeness and absolute risk. A median unbiased (MU) estimator and a minimum risk (MR) estimator are shown to exist within a class of estimators, and to depend upon the medians of two completely specified distributions.
