Similar Documents
20 similar documents found (search time: 31 ms).
1.
Summary.  We construct approximate confidence intervals for a nonparametric regression function, using polynomial splines with free-knot locations. The number of knots is determined by generalized cross-validation. The estimates of the knot locations and coefficients are obtained through a non-linear least squares solution that corresponds to the maximum likelihood estimate. Confidence intervals are then constructed based on the asymptotic distribution of the maximum likelihood estimator. Average coverage probabilities and the accuracy of the estimate are examined via simulation. This includes comparisons between our method and some existing methods, such as smoothing splines and variable-knot selection, as well as a Bayesian version of the variable-knot method. Simulation results indicate that our method works well for smooth underlying functions and also reasonably well for discontinuous functions. It also performs well for fairly small sample sizes.
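The free-knot idea in item 1 — profile out the spline coefficients by linear least squares and optimize only the knot locations in an outer loop — can be illustrated with a minimal numpy-only sketch. The truncated-power basis, the coordinate-wise grid search, and the simulated test function are illustrative choices, not the authors' implementation (which uses non-linear least squares and GCV-selected knot counts):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2.0 * np.pi * x) + rng.normal(scale=0.1, size=x.size)

def design(xs, knots):
    """Truncated-power cubic spline basis with the given interior knots."""
    cols = [np.ones_like(xs), xs, xs**2, xs**3]
    cols += [np.clip(xs - t, 0.0, None) ** 3 for t in knots]
    return np.column_stack(cols)

def rss(knots):
    """Residual sum of squares after profiling out the coefficients by linear LS."""
    X = design(x, knots)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

# crude free-knot search: coordinate-wise grid refinement of 3 interior knots
knots = np.array([0.25, 0.50, 0.75])
grid = np.linspace(0.05, 0.95, 91)
for _ in range(3):  # a few sweeps suffice on this toy problem
    for j in range(knots.size):
        cand = [np.sort(np.r_[np.delete(knots, j), g]) for g in grid]
        knots = cand[int(np.argmin([rss(c) for c in cand]))]

beta, *_ = np.linalg.lstsq(design(x, knots), y, rcond=None)
rmse = float(np.sqrt(np.mean((design(x, knots) @ beta - np.sin(2.0 * np.pi * x)) ** 2)))
```

The inner least-squares step is exact, so the outer search only has to explore knot positions, which is what makes free-knot splines tractable.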

2.
A new algorithm is presented and studied in this paper for fast computation of the nonparametric maximum likelihood estimate of a U-shaped hazard function. It successfully overcomes a difficulty in computing a U-shaped hazard function, which is only properly defined once its anti-mode is known, while the anti-mode itself has to be found during the computation. Specifically, the new algorithm maintains the constant hazard segment, whether its length is zero or positive; the length varies naturally according to the mass values allocated to the associated knots after each update. As an appropriate extension of the constrained Newton method, the new algorithm also inherits its fast convergence, as demonstrated by some real-world data examples. The algorithm works not only for exact observations, but also for purely interval-censored data, and for data mixed with exact and interval-censored observations.

3.
A new method for estimating the proportion of null effects is proposed for solving large-scale multiple comparison problems. It utilises maximum likelihood estimation of nonparametric mixtures, which also provides a density estimate of the test statistics. It overcomes the failure of the usual nonparametric maximum likelihood estimator to place positive probability at the location of the null effects when the mixing distribution is estimated nonparametrically. The profile likelihood is further used to produce a range of null-proportion values for which the corresponding density estimates are all consistent. With a proper choice of a threshold function on the profile likelihood ratio, the upper endpoint of this range can be shown to be a consistent estimator of the null proportion. Numerical studies show that the proposed method has an apparently convergent trend in all cases studied and performs favourably when compared with existing methods in the literature.

4.
It is well known that the nonparametric maximum likelihood estimator (NPMLE) of a survival function may severely underestimate the survival probabilities at very early times for left-truncated data. This problem can be overcome by instead computing a smoothed nonparametric estimator (SNE) via the EMS algorithm. The close connection between the SNE and the maximum penalized likelihood estimator is also established. Extensive Monte Carlo simulations demonstrate the superior performance of the SNE over the NPMLE, in terms of either bias or variance, even for moderately large samples. The methodology is illustrated with an application to the Massachusetts Health Care Panel Study dataset to estimate the probability of being functionally independent for non-poor male and female groups, respectively.

5.
The authors define a class of “partially linear single‐index” survival models that are more flexible than the classical proportional hazards regression models in their treatment of covariates. The latter enter the proposed model either via a parametric linear form or a nonparametric single‐index form. It is then possible to model both linear and functional effects of covariates on the logarithm of the hazard function and if necessary, to reduce the dimensionality of multiple covariates via the single‐index component. The partially linear hazards model and the single‐index hazards model are special cases of the proposed model. The authors develop a likelihood‐based inference to estimate the model components via an iterative algorithm. They establish an asymptotic distribution theory for the proposed estimators, examine their finite‐sample behaviour through simulation, and use a set of real data to illustrate their approach.

6.
Three general algorithms that use different strategies are proposed for computing the maximum likelihood estimate of a semiparametric mixture model. They seek to maximize the likelihood function by, respectively, alternating the parameters, profiling the likelihood and modifying the support set. All three algorithms make direct use of the recently proposed fast and stable constrained Newton method for computing the nonparametric maximum likelihood estimate of a mixing distribution, and additionally employ an optimization algorithm for unconstrained problems. The performance of the algorithms is numerically investigated and compared for solving the Neyman-Scott problem, overcoming overdispersion in logistic regression models and fitting two-level mixed effects logistic regression models. Satisfactory results have been obtained.
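The nonparametric MLE of a mixing distribution that items 6 (and 2, 10) build on can be approximated with a much simpler stand-in than the constrained Newton method: EM over mixing weights on a fixed grid of support points. The sketch below does this for a simulated two-point Poisson mixture; the grid, sample sizes, and rates are all illustrative assumptions, and fixed-grid EM converges far more slowly than the constrained Newton method the paper uses:

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(2)
# simulate a two-point Poisson mixture: rates 1 and 5 with weights 0.4 / 0.6
lam_true = np.where(rng.uniform(size=1500) < 0.4, 1.0, 5.0)
y = rng.poisson(lam_true)

grid = np.linspace(0.1, 10.0, 100)            # candidate support points
logfact = np.array([lgamma(k + 1.0) for k in range(int(y.max()) + 1)])
# n x m matrix of component likelihoods f(y_i | theta_j)
L = np.exp(y[:, None] * np.log(grid)[None, :] - grid[None, :] - logfact[y][:, None])

w = np.full(grid.size, 1.0 / grid.size)       # uniform starting weights
for _ in range(500):
    post = L * w                              # E-step: posterior over support points
    post /= post.sum(axis=1, keepdims=True)
    w = post.mean(axis=0)                     # M-step: updated mixing weights

# for Poisson mixtures E[Y] equals the mean of the mixing distribution,
# so the fitted mixing mean should sit near the true 0.4*1 + 0.6*5 = 3.4
mean_hat = float(w @ grid)
```

After enough iterations the weight vector concentrates near the true support points, which is the behaviour the constrained Newton method reaches much faster by solving a quadratic subproblem per step.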

7.
This paper focuses on efficient estimation, optimal rates of convergence and effective algorithms in the partly linear additive hazards regression model with current status data. We use polynomial splines to estimate both the cumulative baseline hazard function, under a monotonicity constraint, and the nonparametric regression functions, with no such constraint. We propose simultaneous sieve maximum likelihood estimation of the regression parameters and nuisance parameters, and show that the resultant estimator of the regression parameter vector is asymptotically normal and achieves the semiparametric information bound. In addition, we show that the rates of convergence for the estimators of the nonparametric functions are optimal. We implement the proposed estimation through a backfitting algorithm based on generalized linear models. We conduct simulation studies to examine the finite‐sample performance of the proposed estimation method and present an analysis of renal function recovery data for illustration.

8.
In survival data analysis, the interval censoring problem has generally been treated via likelihood methods. Because this likelihood is complex, it is often assumed that the censoring mechanisms do not affect the mortality process. The authors specify conditions that ensure the validity of such a simplified likelihood. They prove the equivalence between different characterizations of noninformative censoring and define a constant‐sum condition analogous to the one derived in the context of right censoring. They also prove that when the noninformative or constant‐sum condition holds, the simplified likelihood can be used to obtain the nonparametric maximum likelihood estimator of the death time distribution function.

9.
Mixture cure models are widely used when a proportion of patients are cured. The proportional hazards mixture cure model and the accelerated failure time mixture cure model are the most popular models in practice. Usually the expectation–maximisation (EM) algorithm is applied to both models for parameter estimation. Bootstrap methods are used for variance estimation. In this paper we propose a smooth semi‐nonparametric (SNP) approach in which maximum likelihood is applied directly to mixture cure models for parameter estimation. The variance can be estimated by the inverse of the second derivative of the SNP likelihood. A comprehensive simulation study indicates good performance of the proposed method. We investigate stage effects in breast cancer by applying the proposed method to breast cancer data from the South Carolina Cancer Registry.
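The mixture-cure likelihood in item 9 — events contribute the uncured density, censored observations contribute cure probability plus uncured survival — can be made concrete with a fully parametric toy version (exponential latency, administrative censoring, direct likelihood maximization over a grid). This is a simplified stand-in for illustration only, not the paper's semi-nonparametric approach:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
cured = rng.uniform(size=n) < 0.3             # true cure fraction pi = 0.3
t_lat = rng.exponential(1.0, size=n)          # latency of the uncured: Exp(rate 1)
t_lat[cured] = np.inf                         # cured subjects never fail
c = 3.0                                       # administrative censoring time
t = np.minimum(t_lat, c)
delta = (t_lat <= c).astype(float)            # 1 = event observed, 0 = censored

def loglik(pi, lam):
    s = np.exp(-lam * t)                      # exponential survival at observed times
    # events: (1 - pi) * f(t); censored: pi + (1 - pi) * S(t)
    return float(np.sum(delta * np.log((1 - pi) * lam * s)
                        + (1 - delta) * np.log(pi + (1 - pi) * s)))

pis = np.linspace(0.05, 0.60, 56)
lams = np.linspace(0.5, 2.0, 61)
ll = np.array([[loglik(p, l) for l in lams] for p in pis])
i, j = np.unravel_index(int(np.argmax(ll)), ll.shape)
pi_hat, lam_hat = float(pis[i]), float(lams[j])
```

The same two-part likelihood is what the EM algorithm or the direct SNP maximization in the abstract optimizes, just with a flexible latency distribution in place of the exponential assumed here.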

10.
We propose a new algorithm for computing the maximum likelihood estimate of a nonparametric survival function for interval-censored data, by extending the recently proposed constrained Newton method in a hierarchical fashion. The new algorithm makes use of the fact that a mixture distribution can be recursively written as a mixture of mixtures, and takes a divide-and-conquer approach to break down a large-scale constrained optimization problem into many small-scale ones, which can be solved rapidly. During the course of optimization, the new algorithm, which we call the hierarchical constrained Newton method, can efficiently reallocate the probability mass, both locally and globally, among potential support intervals. Its convergence is theoretically established based on an equilibrium analysis. Numerical study results suggest that the new algorithm is the best choice for data sets of any size and for solutions with any number of support intervals.

11.
A method for nonparametric estimation of a density based on a randomly censored sample is presented. The density is expressed as a linear combination of cubic M-splines, and the coefficients are determined by pseudo-maximum-likelihood estimation (the likelihood is maximized conditionally on data-dependent knots). By using regression splines (a small number of knots) it is possible to reduce the estimation problem to a space of low dimension while preserving flexibility, thus striking a compromise between parametric approaches and ordinary nonparametric approaches based on spline smoothing. The number of knots is determined by the minimum AIC. Examples with simulated and real data are presented. Asymptotic theory and the bootstrap indicate that the precision and the accuracy of the estimates are satisfactory.

12.
This paper focuses on some recent developments in nonparametric mixture distributions. It discusses nonparametric maximum likelihood estimation of the mixing distribution, emphasizing gradient-type results, especially global results and the global convergence of algorithms such as the vertex direction and vertex exchange methods. However, the NPMLE (or the algorithms constructing it) also provides an estimate of the number of components of the mixing distribution, which may be undesirable for theoretical reasons or disallowed by the physical interpretation of the mixture model. When the number of components is fixed in advance, the aforementioned algorithms cannot be used, and globally convergent algorithms do not exist to date. Instead, the EM algorithm is often used to find maximum likelihood estimates; however, in this case multiple maxima often occur. An example from a meta-analysis of vitamin A and childhood mortality is used to illustrate the considerable inferential importance of identifying the correct global maximum of the likelihood. To improve the behavior of the EM algorithm, we suggest combining gradient-function steps with EM steps to achieve global convergence, leading to the EM algorithm with gradient function update (EMGFU). This algorithm keeps the number of components fixed at exactly k and typically converges to the global maximum. The behavior of the algorithm is illustrated through several examples.

13.
Degradation analysis is a useful technique when life tests result in few or even no failures. Degradation measurements are recorded over time, and the estimation of the time-to-failure distribution plays a vital role in degradation analysis. The parametric method for estimating the time-to-failure distribution assumes a specific parametric model, with a known form, for the distribution of the random-effect parameter. To avoid any such assumption, a nonparametric method can be used. In this paper, we suggest using a nonparametric fourth-order kernel method to estimate the time-to-failure distribution and its percentiles for the simple linear degradation model. The performance of the proposed method is investigated and compared with the classical kernel, maximum likelihood and ordinary least squares methods via simulation. The numerical results show the good performance of the fourth-order kernel method and demonstrate its superiority over the parametric method when there is no information about the form of the random-effect parameter distribution.
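A fourth-order kernel, as used in item 13, is a higher-order kernel whose second moment vanishes, which reduces bias at the cost of possibly non-monotone estimates. One common choice (an assumption here, not necessarily the paper's) is the fourth-order Gaussian kernel K(u) = ((3 - u^2)/2) φ(u), whose integral is Φ(z) + (z/2) φ(z), giving a direct distribution-function estimator. Standard normal data stand in for the pseudo failure times:

```python
import math
import numpy as np

rng = np.random.default_rng(5)
sample = rng.normal(size=1000)               # stand-in for pseudo failure times

def cdf4(t, data, h):
    """Distribution-function estimate with a fourth-order Gaussian kernel
    K(u) = (3 - u**2)/2 * phi(u); its integral is Phi(z) + (z/2) * phi(z)."""
    z = (t - data) / h
    phi = np.exp(-0.5 * z**2) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
    return float(np.mean(Phi + 0.5 * z * phi))

F0 = cdf4(0.0, sample, 0.4)                  # true N(0,1) CDF at 0 is 0.5
F1 = cdf4(1.0, sample, 0.4)                  # true N(0,1) CDF at 1 is about 0.841
```

Percentile estimates then follow by inverting this estimated distribution function numerically.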

14.
ABSTRACT

In this paper, we propose modified spline estimators for nonparametric regression models with right-censored data, especially when the censored response observations are converted to synthetic data. Efficient implementation of these estimators depends on the set of knot points and an appropriate smoothing parameter. We use three algorithms, the default selection method (DSM), myopic algorithm (MA), and full search algorithm (FSA), to select the optimum set of knots in a penalized spline method based on a smoothing parameter, which is chosen based on different criteria, including the improved version of the Akaike information criterion (AICc), generalized cross validation (GCV), restricted maximum likelihood (REML), and Bayesian information criterion (BIC). We also consider the smoothing spline (SS), which uses all the data points as knots. The main goal of this study is to compare the performance of the algorithm and criteria combinations in the suggested penalized spline fits under censored data. A Monte Carlo simulation study is performed and a real data example is presented to illustrate the ideas in the paper. The results confirm that the FSA slightly outperforms the other methods, especially for high censoring levels.  
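The GCV criterion mentioned in item 14 for choosing the smoothing parameter of a penalized spline can be sketched in a few lines for uncensored data (the censored/synthetic-data step of the paper is omitted). The basis, knot placement, and test function are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 150
x = np.sort(rng.uniform(size=n))
y = np.sin(3.0 * np.pi * x) + rng.normal(scale=0.2, size=n)

# penalized spline: cubic truncated-power basis on 10 fixed knots, ridge penalty
knots = np.linspace(0.08, 0.92, 10)
X = np.column_stack([np.ones(n), x, x**2, x**3]
                    + [np.clip(x - t, 0.0, None) ** 3 for t in knots])
D = np.diag([0.0] * 4 + [1.0] * knots.size)   # penalize only the knot coefficients

def gcv(lam):
    """Generalized cross-validation score: n * RSS / (n - tr(H))^2."""
    H = X @ np.linalg.solve(X.T @ X + lam * D, X.T)
    r = y - H @ y
    return n * float(r @ r) / (n - np.trace(H)) ** 2

lams = 10.0 ** np.linspace(-8.0, 2.0, 41)
lam_hat = float(lams[np.argmin([gcv(l) for l in lams])])
beta = np.linalg.solve(X.T @ X + lam_hat * D, X.T @ y)
rmse = float(np.sqrt(np.mean((X @ beta - np.sin(3.0 * np.pi * x)) ** 2)))
```

AICc, REML, and BIC replace the `gcv` score with their own functions of RSS and the effective degrees of freedom tr(H); the knot-selection algorithms (DSM, MA, FSA) vary the knot set rather than only the penalty.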

15.
The analysis of a sample of curves can be done by self-modelling regression methods. Within this framework we follow the ideas of nonparametric maximum likelihood estimation known from event history analysis and the counting process set-up. We derive an infinite dimensional score equation and from there we suggest an algorithm to estimate the shape function for a simple shape invariant model. The nonparametric maximum likelihood estimator that we find turns out to be a Nadaraya–Watson-like estimator, but unlike in the usual kernel smoothing situation we do not need to select a bandwidth or even a kernel function, since the score equation automatically selects the shape and the smoothing parameter for the estimation. We apply the method to a sample of electrophoretic spectra to illustrate how it works.

16.
Abstract. In this article, a naive empirical likelihood ratio is constructed for a non‐parametric regression model with clustered data, by combining the empirical likelihood method and local polynomial fitting. The maximum empirical likelihood estimates for the regression functions and their derivatives are obtained. The asymptotic distributions for the proposed ratio and estimators are established. A bias‐corrected empirical likelihood approach to inference for the parameters of interest is developed, and the residual‐adjusted empirical log‐likelihood ratio is shown to be asymptotically chi‐squared. These results can be used to construct a class of approximate pointwise confidence intervals and simultaneous bands for the regression functions and their derivatives. Owing to our bias correction for the empirical likelihood ratio, the accuracy of the obtained confidence region is not only improved, but also a data‐driven algorithm can be used for selecting an optimal bandwidth to estimate the regression functions and their derivatives. A simulation study is conducted to compare the empirical likelihood method with the normal approximation‐based method in terms of coverage accuracies and average widths of the confidence intervals/bands. An application of this method is illustrated using a real data set.

17.
Two‐stage design is very useful in clinical trials for evaluating the validity of a specific treatment regimen. When the second stage is allowed to continue, the method used to estimate the response rate based on the results of both stages is critical for the subsequent design. The often‐used sample proportion has an evident upward bias. However, the maximum likelihood estimator or the moment estimator tends to underestimate the response rate. A mean‐square error weighted estimator is considered here; its performance is thoroughly investigated via Simon's optimal and minimax designs and Shuster's design. Compared with the sample proportion, the proposed method has a smaller bias, and compared with the maximum likelihood estimator, the proposed method has a smaller mean‐square error. Copyright © 2010 John Wiley & Sons, Ltd.
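The upward bias of the sample proportion noted in item 17 comes from conditioning on passing stage 1: trials that continue had unusually many stage-1 responses. For a design that continues only when x1 > r1, the conditional bias can be computed exactly by enumerating stage-1 outcomes. The stage sizes below are hypothetical, not taken from Simon's or Shuster's tables:

```python
import math

def sample_proportion_bias(p, n1, r1, n2):
    """Exact conditional bias of the pooled proportion (x1 + x2)/(n1 + n2),
    given that stage 2 runs (i.e. x1 > r1) in a two-stage design."""
    pmf = [math.comb(n1, k) * p**k * (1 - p)**(n1 - k) for k in range(n1 + 1)]
    p_cont = sum(pmf[r1 + 1:])
    e_x1 = sum(k * pmf[k] for k in range(r1 + 1, n1 + 1)) / p_cont
    e_phat = (e_x1 + n2 * p) / (n1 + n2)   # E[X2] = n2 * p, independent of stage 1
    return e_phat - p

# hypothetical stage sizes: n1 = 10, continue iff x1 >= 3, n2 = 19
b = sample_proportion_bias(0.2, 10, 2, 19)
```

Estimators like the MLE correct in the opposite direction, which is why a weighting between the two, as the abstract proposes, can reduce both bias and mean-square error.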

18.
Two‐stage designs are widely used to determine whether a clinical trial should be terminated early. In such trials, a maximum likelihood estimate is often adopted to describe the difference in efficacy between the experimental and reference treatments; however, this method is known to display conditional bias. To reduce such bias, a conditional mean‐adjusted estimator (CMAE) has been proposed, although the remaining bias may be nonnegligible when a trial is stopped for efficacy at the interim analysis. We propose a new estimator for adjusting the conditional bias of the treatment effect by extending the idea of the CMAE. This estimator is calculated by weighting the maximum likelihood estimate obtained at the interim analysis and the effect size prespecified when calculating the sample size. We evaluate the performance of the proposed estimator through analytical and simulation studies in various settings in which a trial is stopped for efficacy or futility at the interim analysis. We find that the conditional bias of the proposed estimator is smaller than that of the CMAE when the information time at the interim analysis is small. In addition, the mean‐squared error of the proposed estimator is also smaller than that of the CMAE. In conclusion, we recommend the use of the proposed estimator for trials that are terminated early for efficacy or futility.

19.
Abstract: The authors address the problem of estimating an inter‐event distribution on the basis of count data. They derive a nonparametric maximum likelihood estimate of the inter‐event distribution utilizing the EM algorithm both in the case of an ordinary renewal process and in the case of an equilibrium renewal process. In the latter case, the iterative estimation procedure follows the basic scheme proposed by Vardi for estimating an inter‐event distribution on the basis of time‐interval data; it combines the outputs of the E‐step corresponding to the inter‐event distribution and to the length‐biased distribution. The authors also investigate a penalized likelihood approach to provide the proposed estimation procedure with regularization capabilities. They evaluate the practical estimation procedure using simulated count data and apply it to real count data representing the elongation of coffee‐tree leafy axes.

20.
This paper addresses estimation of the unknown scale parameter of the half-logistic distribution under a Type-I progressively hybrid censoring scheme. We evaluate the maximum likelihood estimate (MLE) via a numerical method and the EM algorithm, and also obtain the approximate maximum likelihood estimate (AMLE). We use a modified acceptance-rejection method to obtain the Bayes estimate and the corresponding highest posterior density intervals. We perform Monte Carlo simulations to compare the performances of the different methods, and we analyze one dataset for illustrative purposes.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)