20 similar references found.
1.
2.
In this article, we propose a nonparametric estimator for percentiles of the time-to-failure distribution obtained from a linear degradation model using the kernel density method. The properties of the proposed kernel estimator are investigated and compared with the well-known maximum likelihood and ordinary least squares estimators via a simulation technique. The mean squared error and the length of the bootstrap confidence interval are used as the criteria for the comparisons. The simulation study shows that the performance of the kernel estimator is acceptable as a general estimator. When the distribution of the data is assumed to be known, the maximum likelihood and ordinary least squares estimators perform better than the kernel estimator, while the kernel estimator is superior when that distributional assumption is violated. The estimators are also compared using a real data set.
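A minimal sketch of the kernel-based percentile idea described above, assuming each unit degrades linearly as y = a + b·t and "fails" when it crosses a threshold D; the distributions, threshold, and all variable names are illustrative assumptions, not taken from the paper.

```python
# Sketch: nonparametric percentile estimation from a linear degradation model.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
n_units, D = 50, 10.0                                   # hypothetical sample size and threshold
a = rng.normal(1.0, 0.2, n_units)                       # random intercepts
b = rng.lognormal(mean=-1.0, sigma=0.3, size=n_units)   # positive random degradation rates

pseudo_failure_times = (D - a) / b                      # time at which each path hits D

kde = gaussian_kde(pseudo_failure_times)                # kernel density estimate of time-to-failure
grid = np.linspace(pseudo_failure_times.min() * 0.5,
                   pseudo_failure_times.max() * 1.5, 2000)
cdf = np.cumsum(kde(grid))
cdf /= cdf[-1]

def kde_percentile(p):
    """Invert the kernel-based CDF on the grid to get the p-th percentile."""
    return grid[np.searchsorted(cdf, p)]

print("kernel estimate of the 10th percentile:", kde_percentile(0.10))
```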
3.
In this paper, we consider using a local linear (LL) smoothing method to estimate a class of discontinuous regression functions. We establish the asymptotic normality of the integrated square error (ISE) of an LL-type estimator and show that the ISE attains an asymptotic rate of convergence as good as that for smooth functions, and that this rate is better than those of the Nadaraya-Watson (NW) and Gasser-Müller (GM) estimators.
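For reference, here is a minimal sketch of the LL estimator and one of the estimators it is compared with (NW), both with a Gaussian kernel; the bandwidth, the test function with a jump at 0.5, and the sample size are illustrative assumptions only.

```python
# Sketch: local linear (LL) vs. Nadaraya-Watson (NW) kernel regression at a point x0.
import numpy as np

def gaussian_kernel(u):
    return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

def nw_estimate(x0, x, y, h):
    w = gaussian_kernel((x - x0) / h)
    return np.sum(w * y) / np.sum(w)

def ll_estimate(x0, x, y, h):
    # Weighted least squares of y on (1, x - x0); the fitted intercept is m_hat(x0).
    w = gaussian_kernel((x - x0) / h)
    X = np.column_stack([np.ones_like(x), x - x0])
    WX = X * w[:, None]
    beta = np.linalg.solve(X.T @ WX, WX.T @ y)
    return beta[0]

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 200))
m = np.where(x < 0.5, np.sin(2 * np.pi * x), np.sin(2 * np.pi * x) + 1.0)  # jump at 0.5
y = m + rng.normal(0, 0.2, x.size)
print(ll_estimate(0.3, x, y, h=0.05), nw_estimate(0.3, x, y, h=0.05))
```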
4.
By modifying the direct method for solving the overdetermined linear system, we are able to present an algorithm for L1 estimation that appears to be computationally superior to any other known algorithm for the simple linear regression problem.
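The paper's modified direct method is not reproduced here; the sketch below shows only a generic L1 (least absolute deviations) fit for simple linear regression, written as a standard linear program. The data and variable names are hypothetical.

```python
# Generic L1 (least absolute deviations) simple linear regression via linear programming.
import numpy as np
from scipy.optimize import linprog

def lad_simple_regression(x, y):
    n = len(x)
    # Variables: [b0, b1, u_1..u_n, v_1..v_n] with residual e_i = u_i - v_i and u_i, v_i >= 0.
    c = np.concatenate([[0.0, 0.0], np.ones(2 * n)])          # minimise sum(u) + sum(v) = sum|e|
    A_eq = np.hstack([np.column_stack([np.ones(n), x]),        # b0 + b1*x_i
                      np.eye(n), -np.eye(n)])                  # + u_i - v_i = y_i
    res = linprog(c, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None), (None, None)] + [(0, None)] * (2 * n))
    return res.x[0], res.x[1]

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 1.9, 3.2, 3.9, 10.0])     # last point is an outlier
print(lad_simple_regression(x, y))            # the L1 fit is resistant to the outlier
```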
5.
Ling Leng 《The American Statistician》2020,74(3):226-232
Abstract: Errors-in-variables (EIV) regression is often used to gauge the linear relationship between two variables that both suffer from measurement and other errors, such as the comparison of two measurement platforms (e.g., RNA sequencing vs. microarray). Scientists are often at a loss as to which EIV regression model to use, for there are infinitely many choices. We provide sound guidelines toward viable solutions to this dilemma by introducing two general nonparametric EIV regression frameworks: the compound regression and the constrained regression. It is shown that these approaches are equivalent to each other and to the general parametric structural modeling approach. The advantages of these methods lie in their intuitive geometric representations, their distribution-free nature, and their ability to offer candidate solutions with various optimal properties when the ratio of the error variances is unknown. Each includes the classic nonparametric regression methods of ordinary least squares, geometric mean regression (GMR), and orthogonal regression as special cases. Under these general frameworks, one can readily uncover some surprising optimal properties of the GMR, and truly comprehend the benefit of data normalization. Supplementary materials for this article are available online.
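To make the three special cases concrete, the sketch below computes the OLS, GMR, and orthogonal-regression slopes on simulated two-platform data; the error variances, the true slope of 2, and all names are assumptions made purely for illustration.

```python
# Sketch: OLS, geometric mean regression (GMR), and orthogonal regression slopes.
import numpy as np

def slopes(x, y):
    x, y = x - x.mean(), y - y.mean()
    sxx, syy, sxy = np.sum(x * x), np.sum(y * y), np.sum(x * y)
    b_ols = sxy / sxx                                    # regress y on x
    b_gmr = np.sign(sxy) * np.sqrt(syy / sxx)            # geometric mean of the two OLS slopes
    # Orthogonal regression (error-variance ratio 1): minimises perpendicular distances.
    b_orth = ((syy - sxx) + np.sqrt((syy - sxx) ** 2 + 4 * sxy**2)) / (2 * sxy)
    return b_ols, b_gmr, b_orth

rng = np.random.default_rng(2)
t = rng.normal(size=500)                                  # latent "true" values
x = t + rng.normal(scale=0.5, size=500)                   # platform 1, with error
y = 2.0 * t + rng.normal(scale=0.5, size=500)             # platform 2, with error
print(slopes(x, y))   # OLS is attenuated; GMR and orthogonal lie closer to the true slope 2
```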
6.
《Journal of Statistical Computation and Simulation》2012,82(1):81-91
Inference for a generalized linear model is generally performed using asymptotic approximations for the bias and the covariance matrix of the parameter estimators. For small experiments, these approximations can be poor and result in estimators with considerable bias. We investigate the properties of designs for small experiments when the response is described by a simple logistic regression model and parameter estimators are to be obtained by the maximum penalized likelihood method of Firth [Firth, D., 1993, Bias reduction of maximum likelihood estimates. Biometrika, 80, 27–38]. Although this method achieves a reduction in bias, we illustrate that the remaining bias may be substantial for small experiments, and propose minimization of the integrated mean square error, based on Firth's estimates, as a suitable criterion for design selection. This approach is used to find locally optimal designs for two support points.
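Firth's penalty referred to above is the Jeffreys-prior term added to the log-likelihood, l(β) + ½·log|X'WX|. A minimal sketch of the resulting penalized fit for a simple logistic regression follows; the generic numerical optimiser, the simulated 20-run "experiment", and all variable names are assumptions, and the paper's design-selection criterion is not implemented.

```python
# Sketch: Firth's (1993) bias-reduced logistic regression via the Jeffreys-prior penalty.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def neg_penalized_loglik(beta, X, y):
    eta = X @ beta
    p = expit(eta)
    loglik = np.sum(y * eta - np.logaddexp(0.0, eta))     # Bernoulli log-likelihood
    W = p * (1.0 - p)
    _, logdet = np.linalg.slogdet(X.T @ (X * W[:, None])) # log|X'WX|
    return -(loglik + 0.5 * logdet)                       # Firth's penalized log-likelihood

rng = np.random.default_rng(5)
X = np.column_stack([np.ones(20), rng.normal(size=20)])   # a small hypothetical experiment
y = rng.binomial(1, expit(0.5 + 1.5 * X[:, 1]))
fit = minimize(neg_penalized_loglik, x0=np.zeros(2), args=(X, y))
print(fit.x)    # penalized-likelihood estimates of (intercept, slope)
```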
7.
The purpose of this work is to develop statistical methods for using degradation measures to estimate a survival function for a linear degradation model. In this paper, we review existing methods and then describe a parametric approach, focusing on estimation of the survival function. A simulation study is conducted to evaluate the performance of the estimation method, and the method is illustrated using real data.
8.
Li Chen 《Communications in Statistics - Theory and Methods》2013,42(20):5933-5945
Abstract: In this article, empirical likelihood is applied to the linear regression model with inequality constraints. We prove that the asymptotic distribution of the adjusted empirical likelihood ratio test statistic is a weighted mixture of chi-square distributions.
9.
A simple least squares method for estimating a change in mean of a sequence of independent random variables is studied. The method first tests for a change in mean based on the regression principle of constrained and unconstrained sums of squares. Conditionally on a decision by this test that a change has occurred, least squares estimates are used to estimate the change point, the initial mean level (prior to the change point) and the change itself. The estimates of the initial level and change are functions of the change point estimate. All estimates are shown to be consistent, and those for the initial level and change are shown to be asymptotically jointly normal. The method performs well for moderately large shifts (one standard deviation or more), but the estimates of the initial level and change are biased in a predictable way for small shifts. The large sample theory is helpful in understanding this problem. The asymptotic distribution of the change point estimator is obtained for local shifts in mean, but the case of non-local shifts appears analytically intractable.
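The estimation step described above amounts to scanning all split points for the smallest pooled within-segment sum of squares. The sketch below illustrates that least-squares change-point estimate on simulated data with a one-standard-deviation shift; the sample sizes and shift size are assumptions, and the preliminary test is omitted.

```python
# Sketch: least squares estimate of a single change in mean.
import numpy as np

def ls_change_point(x):
    n = len(x)
    best_k, best_sse = None, np.inf
    for k in range(1, n):                        # change after observation k
        left, right = x[:k], x[k:]
        sse = np.sum((left - left.mean()) ** 2) + np.sum((right - right.mean()) ** 2)
        if sse < best_sse:
            best_k, best_sse = k, sse
    mu0_hat = x[:best_k].mean()                  # initial mean level
    delta_hat = x[best_k:].mean() - mu0_hat      # estimated shift
    return best_k, mu0_hat, delta_hat

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(1.0, 1.0, 40)])  # one-sd shift at 60
print(ls_change_point(x))
```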
10.
John Haslett Dominic Dillane 《Journal of the Royal Statistical Society. Series B, Statistical methodology》2004,66(1):131-143
Summary. 'Delete = replace' is a powerful and intuitive modelling identity. This paper extends previous work by stating and proving the identity in more general terms and extending its application to deletion diagnostics for estimates of variance components obtained by restricted maximum likelihood estimation for the linear mixed model. We present a new, fast, transparent and approximate computational procedure, arising as a by-product of the fitting process. We illustrate the effect of the deletion of individual observations, of 'subjects' and of arbitrary subsets. Central to the identity and its application is the conditional residual.
11.
Innocent Ngaruye Joseph Nzabanita Dietrich von Rosen Martin Singull 《Communications in Statistics - Theory and Methods》2017,46(21):10835-10850
In this article, small area estimation under a multivariate linear model for repeated measures data is considered. The proposed model borrows strength both across small areas and over time, and accounts for repeated surveys, grouped response units, and random effects variations. Estimation of the model parameters is discussed within a likelihood-based approach. Predictions of the random effects, of the small area means across time points, and of the per-group units are derived. A parametric bootstrap method is proposed for estimating the mean squared error of the predicted small area means. The results are supported by a simulation study.
12.
The step-stress model has received a considerable amount of attention in recent years. In the usual step-stress experiment, the stress level is allowed to increase at each step to obtain rapid failures of the experimental units, so the expected lifetime of an experimental unit is shortened as the stress level increases. Although an extensive amount of work has been done on step-stress models, not enough attention has been paid to analyses that incorporate this information. We consider a simple step-stress model and provide Bayesian inference for the unknown parameters under the cumulative exposure model assumption. It is assumed that the lifetimes of the experimental units are exponentially distributed with different scale parameters at different stress levels. It is further assumed that the stress level increases at each step, and hence the expected lifetime decreases; we incorporate this restriction through the prior assumptions. It is observed that different censoring schemes can be incorporated very easily under a general setup. Monte Carlo simulations have been performed to assess the effectiveness of the proposed method, and two datasets have been analyzed for illustrative purposes.
13.
In this paper, we consider the simple step-stress model for a two-parameter exponential distribution when both parameters are unknown and the data are Type-II censored. It is assumed that under the two different stress levels only the scale parameter changes, while the location parameter remains unchanged. It is observed that the maximum likelihood estimators do not always exist, and we obtain the maximum likelihood estimates of the unknown parameters whenever they do. We provide the exact conditional distributions of the maximum likelihood estimators of the scale parameters. Since constructing exact confidence intervals from the conditional distributions is very difficult, we propose to use the observed Fisher information matrix for this purpose, and we also suggest using the bootstrap method for constructing confidence intervals. Bayes estimates and associated credible intervals are obtained using the importance sampling technique. Extensive simulations are performed to compare the performances of the different confidence and credible intervals in terms of their coverage percentages and average lengths. The performance of the bootstrap confidence intervals is quite satisfactory even for small sample sizes.
14.
Exact confidence interval estimation for accelerated life regression models with censored smallest extreme value (or Weibull) data is often impractical. This paper evaluates the accuracy of approximate confidence intervals based on the asymptotic normality of the maximum likelihood estimator, the asymptotic chi-square distribution of the likelihood ratio statistic, mean and variance corrections to the likelihood ratio statistic, and the so-called Bartlett correction to the likelihood ratio statistic. The Monte Carlo evaluations under various degrees of time censoring show that uncorrected likelihood ratio intervals are very accurate in situations with heavy censoring. The benefits of mean and variance correction to the likelihood ratio statistic are only realized with light or no censoring. The Bartlett correction tends to result in conservative intervals. Intervals based on the asymptotic normality of maximum likelihood estimators are anticonservative and should be used with much caution.
15.
16.
In reliability analysis, accelerated life-testing allows for gradual increase of the stress levels on test units during an experiment. In a special class of accelerated life tests known as step-stress tests, the stress levels increase discretely at pre-fixed time points, which allows the experimenter to obtain information on the parameters of the lifetime distributions more quickly than under normal operating conditions. Moreover, when a test unit fails, there is often more than one fatal cause of the failure, such as mechanical or electrical. In this article, we consider the simple step-stress model under Type-II censoring when the lifetime distributions of the different risk factors are independently exponentially distributed. Under this setup, we derive the maximum likelihood estimators (MLEs) of the unknown mean parameters of the different causes under the assumption of a cumulative exposure model. The exact distributions of the MLEs of the parameters are then derived through the use of conditional moment generating functions. Using these exact distributions, as well as the asymptotic distributions and the parametric bootstrap method, we discuss the construction of confidence intervals for the parameters and assess their performance through Monte Carlo simulations. Finally, we illustrate the methods of inference discussed here with an example.
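To show the cumulative exposure idea in its simplest form, the sketch below gives the closed-form MLEs for a simple step-stress model with exponential lifetimes, complete data, and a single failure cause, which is a deliberate simplification of the competing-risks, Type-II censored setting of the abstract; the change time tau, the true means, and all names are assumptions.

```python
# Sketch: MLEs for a simple step-stress model, exponential lifetimes, cumulative exposure,
# complete (uncensored) data, single failure cause.
import numpy as np

def step_stress_mle(t, tau):
    """MLEs of the mean lifetimes (theta1, theta2) before/after the stress change at tau."""
    t = np.asarray(t)
    before, after = t[t < tau], t[t >= tau]
    n1, n2 = len(before), len(after)
    if n1 == 0 or n2 == 0:
        raise ValueError("MLEs exist only if failures occur at both stress levels")
    # Total exposure accumulated at stress level 1: failures before tau contribute their
    # lifetimes, units surviving past tau contribute tau each.
    theta1_hat = (before.sum() + n2 * tau) / n1
    theta2_hat = (after - tau).sum() / n2
    return theta1_hat, theta2_hat

rng = np.random.default_rng(4)
tau, theta1, theta2 = 5.0, 8.0, 2.0
u1 = rng.exponential(theta1, 30)                                 # hypothetical lives at stress 1
t = np.where(u1 < tau, u1, tau + rng.exponential(theta2, 30))    # cumulative-exposure lifetimes
print(step_stress_mle(t, tau))
```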
17.
This paper considers alternative estimators of the intercept parameter of the linear regression model with normal errors when uncertain non-sample prior information about the value of the slope parameter is available. The maximum likelihood, restricted, preliminary test, and shrinkage estimators are considered. The relative performances of the estimators are investigated on the basis of their quadratic biases and mean square errors, and both analytical and graphical comparisons are explored. None of the estimators is found to uniformly dominate the others. However, if the non-sample prior information regarding the value of the slope is not too far from its true value, the shrinkage estimator of the intercept parameter dominates the rest of the estimators.
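Of the estimators listed above, the preliminary test estimator is easy to illustrate: use the restricted intercept when a test of the prior slope value is not rejected, otherwise the unrestricted MLE. The sketch below is generic; the prior slope b1_0, the significance level, and the data are assumptions, and the shrinkage estimator of the paper is not implemented.

```python
# Sketch: preliminary test (pretest) estimator of the intercept in simple linear regression.
import numpy as np
from scipy import stats

def pretest_intercept(x, y, b1_0, alpha=0.05):
    n = len(x)
    xbar, ybar = x.mean(), y.mean()
    sxx = np.sum((x - xbar) ** 2)
    b1_hat = np.sum((x - xbar) * (y - ybar)) / sxx     # unrestricted slope
    b0_unres = ybar - b1_hat * xbar                    # unrestricted intercept (MLE)
    b0_res = ybar - b1_0 * xbar                        # restricted intercept (slope fixed at prior)
    resid = y - b0_unres - b1_hat * x
    s2 = np.sum(resid ** 2) / (n - 2)
    t_stat = (b1_hat - b1_0) / np.sqrt(s2 / sxx)       # t-test of H0: slope = b1_0
    reject = abs(t_stat) > stats.t.ppf(1 - alpha / 2, n - 2)
    return b0_unres if reject else b0_res

rng = np.random.default_rng(7)
x = rng.normal(size=40)
y = 1.0 + 2.0 * x + rng.normal(size=40)
print(pretest_intercept(x, y, b1_0=2.0))
```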
18.
In reliability and life-testing experiments, the researcher is often interested in the effects of extreme or varying stress factors such as temperature, voltage, and load on the lifetimes of experimental units. The step-stress test, which is a special class of accelerated life-tests, allows the experimenter to increase the stress levels at fixed times during the experiment in order to obtain information on the parameters of the life distributions more quickly than under normal operating conditions. In this paper, we consider a new step-stress model in which the life-testing experiment terminates either at a pre-fixed time (say, T_{m+1}) or at a random time ensuring at least a specified number of failures (say, r out of n). Under this model, in which the data obtained are Type-II hybrid censored, we consider the case of an exponential distribution for the underlying lifetimes. We then derive the maximum likelihood estimators (MLEs) of the parameters assuming a cumulative exposure model with exponentially distributed lifetimes. The exact distributions of the MLEs of the parameters are obtained through the use of conditional moment generating functions. We also derive confidence intervals for the parameters using these exact distributions, the asymptotic distributions of the MLEs, and the parametric bootstrap method, and assess their performance through a Monte Carlo simulation study. Finally, we present two examples to illustrate all the methods of inference discussed here.
19.
There are numerous situations in categorical data analysis where one wishes to test hypotheses involving a set of linear inequality constraints placed upon the cell probabilities. For example, it may be of interest to test for symmetry in k × k contingency tables against one-sided alternatives. In this case, the null hypothesis imposes a set of linear equalities on the cell probabilities (namely p_{ij} = p_{ji} for all i > j), whereas the alternative specifies directional inequalities. Another important application (Robertson, Wright, and Dykstra 1988) is testing for or against stochastic ordering between the marginals of a k × k contingency table when the variables are ordinal and independence holds. Here we extend existing likelihood-ratio results to cover more general situations. To be specific, we consider testing H_0 against H_1 − H_0 and H_1 against H_2 − H_1, where H_0: Σ_{i=1}^{k} p_i x_{ji} = 0, j = 1, …, s; H_1: Σ_{i=1}^{k} p_i x_{ji} ≥ 0, j = 1, …, s; and H_2 does not impose any restrictions on p. The x_{ji}'s are known constants, and s ≤ k − 1. We show that the asymptotic distributions of the likelihood-ratio tests are of chi-bar-square type, and we provide expressions for the weighting values.
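For reference, the chi-bar-square distribution mentioned above has the standard weighted-mixture tail form shown below; the weights w_j depend on the constraint cone and must be derived case by case, and the expressions obtained in the paper are not reproduced here.

```latex
% Generic chi-bar-square tail probability (standard in order-restricted inference).
% The weights w_j are determined by the constraint geometry; chi^2_0 is a point mass at zero.
\[
  \Pr\!\left(\bar{\chi}^{2} \ge c\right)
  = \sum_{j=0}^{s} w_{j}\,\Pr\!\left(\chi^{2}_{j} \ge c\right),
  \qquad \sum_{j=0}^{s} w_{j} = 1 .
\]
```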
20.
Thomas B. Fomby 《Communications in Statistics - Simulation and Computation》2013,42(2):551-570
Evidence presented by Fomby and Guilkey (1983) suggests that Hatanaka's estimator of the coefficients in the lagged dependent variable-serial correlation regression model performs poorly, not because of poor selection of the estimate of the autocorrelation coefficient, but because of the lack of a first observation correction. This study conducts a Monte Carlo investigation of the small-sample efficiency gains obtainable from a first observation correction suggested by Harvey (1981). Results presented here indicate that substantial gains result from the first observation correction. However, in comparing Hatanaka's procedure with first observation correction to maximum likelihood search, it appears that ignoring the determinantal term of the full likelihood function causes some loss of small-sample efficiency. Thus, when computer costs and programming constraints are not binding, maximum likelihood search is to be recommended. In contrast, users who have access to only rudimentary least squares programs would be well served by Hatanaka's two-step procedure with first observation correction.
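To make the "first observation correction" concrete, here is a minimal sketch of the generic AR(1) (Prais-Winsten-type) transformation in which the first observation is rescaled by sqrt(1 − ρ²); it is not a reproduction of Hatanaka's two-step procedure for the lagged-dependent-variable model, and the function and variable names are assumptions.

```python
# Sketch: AR(1) quasi-differencing with a first observation correction, shown on a simple
# static regression with AR(1) errors (illustration only).
import numpy as np

def ar1_transform(y, X, rho):
    """Quasi-difference the data, keeping a rescaled first observation."""
    y_t, X_t = np.empty_like(y), np.empty_like(X)
    scale = np.sqrt(1.0 - rho**2)
    y_t[0], X_t[0] = scale * y[0], scale * X[0]       # first observation correction
    y_t[1:] = y[1:] - rho * y[:-1]                    # usual quasi-differencing
    X_t[1:] = X[1:] - rho * X[:-1]
    return y_t, X_t

rng = np.random.default_rng(6)
n, rho = 100, 0.6
X = np.column_stack([np.ones(n), rng.normal(size=n)])
e = np.zeros(n)
for s in range(1, n):
    e[s] = rho * e[s - 1] + rng.normal()              # AR(1) disturbances
y = X @ np.array([1.0, 2.0]) + e
y_t, X_t = ar1_transform(y, X, rho)
beta_fgls = np.linalg.lstsq(X_t, y_t, rcond=None)[0]  # OLS on transformed data = feasible GLS
print(beta_fgls)
```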