Similar Articles
20 similar articles found (search time: 31 ms)
1.
The nonparametric component in a partially linear model is estimated by a linear combination of fixed-knot cubic B-splines with a second-order difference penalty on the adjacent B-spline coefficients. The resulting penalized least-squares estimator is used to construct two Wald-type spline-based test statistics for the null hypothesis of linearity of the nonparametric function. When the number of knots is fixed, the first test statistic asymptotically has the distribution of a linear combination of independent chi-squared random variables, each with one degree of freedom, under the null hypothesis. The smoothing parameter is determined by specifying a value for the asymptotically expected value of the test statistic under the null hypothesis. When the number of knots is fixed and under the null hypothesis, the second test statistic asymptotically has a chi-squared distribution with K = q + 2 degrees of freedom, where q is the number of knots used for estimation. The power performance of the two proposed tests is investigated via simulation experiments, and the practicality of the proposed methodology is illustrated using a real-life data set.
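The penalized least-squares step described above can be sketched in a few lines. This is a generic P-spline fit (fixed-knot cubic B-splines with a second-order difference penalty on adjacent coefficients), not the paper's test statistics; the helper name `pspline_fit`, the equally spaced knot grid, and the smoothing parameter `lam` are illustrative choices.

```python
import numpy as np
from scipy.interpolate import BSpline

def pspline_fit(x, y, num_knots=10, lam=1.0):
    """Penalized least squares with cubic B-splines and a
    second-order difference penalty on adjacent coefficients."""
    k = 3  # cubic
    inner = np.linspace(x.min(), x.max(), num_knots)
    t = np.r_[[inner[0]] * k, inner, [inner[-1]] * k]   # clamped knot vector
    n_basis = len(t) - k - 1
    B = BSpline.design_matrix(x, t, k).toarray()        # (n, n_basis) basis matrix
    D = np.diff(np.eye(n_basis), n=2, axis=0)           # second-order differences
    # minimize ||y - B c||^2 + lam * ||D c||^2  =>  (B'B + lam D'D) c = B'y
    coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
    return B @ coef

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 200)
fit = pspline_fit(x, y)
```

Large values of `lam` force the fitted coefficients toward a linear trend, which is what makes this penalty a natural basis for a linearity test.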

2.
Often in practice, one is interested in the situation where the lifetime data are censored. Censoring is a common phenomenon when analyzing lifetime data, frequently arising from time constraints. In this paper, the flexible Weibull distribution proposed in Bebbington et al. [A flexible Weibull extension, Reliab. Eng. Syst. Safety 92 (2007), pp. 719–726] is studied using maximum likelihood techniques based on three different algorithms: Newton–Raphson, Levenberg–Marquardt, and trust-region reflective. The proposed parameter estimation method is introduced and shown to work from both theoretical and practical points of view. On one hand, we apply maximum likelihood estimation to complete simulated and real data; on the other hand, we study for the first time the model on simulated and real type I censored samples. The estimation results are validated by a statistical test.
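A sketch of the type I censored likelihood is below, assuming the flexible Weibull extension has survival function S(t) = exp(−exp(αt − β/t)); the simulation setup, censoring time, and starting values are illustrative, and one generic optimizer stands in for the three algorithms studied in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(theta, t, delta):
    """Negative log-likelihood of the flexible Weibull extension
    S(t) = exp(-exp(a*t - b/t)) under type I censoring.
    delta = 1 for an observed failure, 0 for a censored time."""
    a, b = theta
    if a <= 0 or b <= 0:
        return np.inf
    z = a * t - b / t
    # log f(t) = log(a + b/t^2) + z - exp(z);  log S(t) = -exp(z)
    return -np.sum(delta * (np.log(a + b / t**2) + z) - np.exp(z))

rng = np.random.default_rng(1)
a0, b0 = 0.5, 1.5
u = rng.uniform(size=500)
# invert the CDF: a*t - b/t = log(-log(1 - u)) is quadratic in t
c = np.log(-np.log(1 - u))
t = (c + np.sqrt(c**2 + 4 * a0 * b0)) / (2 * a0)
tau = 3.0                                # fixed type I censoring time
delta = (t <= tau).astype(float)
t_obs = np.minimum(t, tau)
res = minimize(neg_loglik, x0=[1.0, 1.0], args=(t_obs, delta),
               method="Nelder-Mead")
```

Censored observations contribute only their survival probability log S(τ) to the likelihood, which is why the `delta` indicator multiplies just the density terms.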

3.
M-estimation (robust estimation) for the parameters in nonlinear mixed-effects models using the Fisher scoring method is investigated in this article; the estimator shares some features of maximum likelihood estimation, namely consistency and asymptotic normality. Score tests for autocorrelation and random effects based on M-estimation, together with their asymptotic distributions, are also studied. The performance of the test statistics is evaluated via simulations and a real data analysis of plasma concentration data.

4.
In this paper, a two-parameter discrete distribution named the misclassified size-biased discrete Lindley distribution is defined for the situation of misclassification in which some of the observations corresponding to x = c + 1 are reported as x = c with misclassification error α. Different estimation methods, namely maximum likelihood estimation, moment estimation, and Bayes estimation, are considered for the parameters of this distribution. The methods are compared by mean square error through a simulation study with varying sample sizes. Further, a general form of the factorial moments is also obtained. A real-life data set is used to fit the misclassified size-biased discrete Lindley distribution.

5.
Lin  Tsung I.  Lee  Jack C.  Ni  Huey F. 《Statistics and Computing》2004,14(2):119-130
A finite mixture model using the multivariate t distribution has been shown to be a robust extension of normal mixtures. In this paper, we present a Bayesian approach to inference about the parameters of t-mixture models. The prior specifications are weakly informative, chosen to avoid nonintegrable posterior distributions. We present two efficient EM-type algorithms for computing the joint posterior mode with the observed data and an incomplete future vector as the sample. Markov chain Monte Carlo sampling schemes are also developed to obtain the target posterior distribution of the parameters. The advantages of the Bayesian approach over the maximum likelihood method are demonstrated on a set of real data.

6.
In this work, we develop a method of adaptive non-parametric estimation based on 'warped' kernels. The aim is to estimate a real-valued function s from a sample of random couples (X, Y). We work with the transformed data (Φ(X), Y), with Φ a one-to-one function, to build a collection of kernel estimators. The data-driven bandwidth selection is performed with a method inspired by Goldenshluger and Lepski (Ann. Statist., 39, 2011, 1608). The method can handle various problems, such as additive and multiplicative regression, conditional density estimation, hazard rate estimation based on randomly right-censored data, and cumulative distribution function estimation from current-status data. The interest is threefold. First, the squared-bias/variance trade-off is realized automatically. Next, non-asymptotic risk bounds are derived. Lastly, the estimator is easily computed, thanks to its simple expression; a short simulation study is presented.

7.
Linear mixed models are widely used when multiple correlated measurements are made on each unit of interest. In many applications the units may form several distinct clusters, and such heterogeneity is more appropriately modelled by a finite mixture linear mixed model. The classical estimation approach, in which both the random effects and the error terms are assumed to follow a normal distribution, is sensitive to outliers, and failure to accommodate outliers may greatly jeopardize model estimation and inference. We propose a new mixture linear mixed model using the multivariate t distribution. For each mixture component, we assume the response and the random effects jointly follow a multivariate t distribution, which conveniently robustifies the estimation procedure. An efficient expectation conditional maximization algorithm is developed for maximum likelihood estimation. The degrees-of-freedom parameters of the t distributions are chosen adaptively from the data, achieving a flexible trade-off between estimation robustness and efficiency. Simulation studies and an application to longitudinal lung-growth data showcase the efficacy of the proposed approach.

8.
In this paper, we introduce a new risk measure, the so-called conditional tail moment. It is defined as the moment of order a ≥ 0 of the loss distribution above the upper α-quantile, where α ∈ (0, 1). Estimating the conditional tail moment permits us to estimate all risk measures based on conditional moments, such as the conditional tail expectation, conditional value at risk, or conditional tail variance. Here, we focus on the estimation of these risk measures in the case of extreme losses, where α is no longer fixed but tends to 0. It is moreover assumed that the loss distribution is heavy-tailed and depends on a covariate. The estimation method thus combines non-parametric kernel methods with extreme-value statistics. The asymptotic distribution of the estimators is established, and their finite-sample behaviour is illustrated both on simulated data and on a real data set of daily rainfalls.
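In the simplest covariate-free case with fixed α, the conditional tail moment has a direct empirical counterpart: the a-th moment of the observations above the empirical upper α-quantile. The following sketch (function name and data illustrative) computes it for a heavy-tailed sample; order a = 1 recovers the conditional tail expectation.

```python
import numpy as np

def tail_moment(x, a, alpha):
    """Empirical moment of order a of the losses above the
    upper alpha-quantile (covariate-free sketch of the idea)."""
    q = np.quantile(x, 1 - alpha)       # empirical upper alpha-quantile
    tail = x[x > q]                     # extreme losses only
    return np.mean(tail ** a)

rng = np.random.default_rng(2)
x = rng.pareto(3.0, 100_000) + 1.0      # classical Pareto(3) losses, heavy-tailed
cte = tail_moment(x, a=1, alpha=0.05)   # conditional tail expectation
```

For a classical Pareto with tail index 3, the exceedances above u are again Pareto, so the conditional tail expectation is 1.5u; with α = 0.05 this gives roughly 4.07, which the empirical estimate recovers.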

9.
In this article, we discuss parameter estimation for a k-factor generalized long-memory process with conditionally heteroskedastic noise. Two estimation methods are proposed: the first is based on the conditional distribution of the process, and the second is obtained as an extension of Whittle's estimation approach. For comparison, Monte Carlo simulations are used to evaluate the finite-sample performance of these estimation techniques under four different conditional distribution functions.

10.
In some statistical applications, the data may not constitute a random sample of the whole population, with some subjects having a lower probability of belonging to the sample. Consequently, statistical inference for such data sets usually yields biased estimates. In such situations, the length-biased version of the original random variable, as a special weighted distribution, often produces better inferences. Here, an alternative weighted distribution based on the mean residual life is suggested to treat the biasedness. Since the Rayleigh distribution appears in many real applications, the proposed weighting is applied to produce a new lifetime distribution based on the Rayleigh model. Statistical properties of the proposed distribution are investigated. A simulation study and a real data set illustrate that the mean residual weighted Rayleigh distribution gives a better fit than both the original and the length-biased Rayleigh distribution.

11.
In this paper, we consider the inferential aspect of the nonparametric estimation of a conditional function g, where X_{t,m} represents the vector containing the m conditioning lagged values of the series and g is defined through an arbitrary measurable function. The local polynomial estimator of order p is used for the estimation of the function g and of its partial derivatives up to total order p. We consider α-mixing processes, and we propose the use of a particular resampling method, the local polynomial bootstrap, for approximating the sampling distribution of the estimator. After analyzing the consistency of the proposed method, we present a simulation study which gives evidence of its finite-sample behaviour.

12.
The author considers estimation under a Gamma process model for degradation data. The setting for degradation data is one in which n independent units, each with a Gamma process with a common shape function and scale parameter, are observed at several possibly different times. Covariates can be incorporated into the model by taking the scale parameter as a function of the covariates. The author proposes using the maximum pseudo-likelihood method to estimate the unknown parameters. The method requires usage of the Pool Adjacent Violators Algorithm. Asymptotic properties, including consistency, convergence rate and asymptotic distribution, are established. Simulation studies are conducted to validate the method and its application is illustrated by using bridge beams data and carbon-film resistors data. The Canadian Journal of Statistics 37: 102–118; 2009 © 2009 Statistical Society of Canada

13.
A finite mixture model using the Student's t distribution has been recognized as a robust extension of normal mixtures. Recently, a mixture of skew normal distributions has been found to be effective in the treatment of heterogeneous data involving asymmetric behaviors across subclasses. In this article, we propose a robust mixture framework based on the skew t distribution to efficiently deal with heavy-tailedness, extra skewness and multimodality in a wide range of settings. Statistical mixture modeling based on normal, Student's t and skew normal distributions can be viewed as special cases of the skew t mixture model. We present analytically simple EM-type algorithms for iteratively computing maximum likelihood estimates. The proposed methodology is illustrated by analyzing a real data example.

14.
Traditional classification is based on the assumption that the distribution of the indicator variable X within a class is homogeneous. However, when the data in a class come from a heterogeneous distribution, the likelihood ratio of the two classes is not unique. In this paper, we construct a classifier via an ambiguity criterion for the case where the distribution of X is heterogeneous within a single class. The separated historical data in each situation are used to estimate the thresholds respectively, and the final boundary is chosen as the maximum and minimum of the thresholds over all situations. Our approach attains minimum ambiguity with high classification accuracy, allowing a precise decision. In addition, a nonparametric estimate of the classification region and its theoretical properties are derived. A simulation study and a real data analysis demonstrate the effectiveness of the method.

15.
In this paper, we consider a statistical estimation problem known as atomic deconvolution. Introduced in reliability, this model has a direct application to biological data produced by flow cytometers. From a statistical point of view, we aim at inferring the percentage of cells expressing the selected molecule and the probability distribution function associated with its fluorescence emission. We propose an adaptive estimation procedure based on a previous deconvolution procedure introduced by Es, Gugushvili, and Spreij [(2008), 'Deconvolution for an atomic distribution', Electronic Journal of Statistics, 2, 265–297] and Gugushvili, Es, and Spreij [(2011), 'Deconvolution for an atomic distribution: rates of convergence', Journal of Nonparametric Statistics, 23, 1003–1029]. To estimate both the mixing parameter and the mixing density automatically, we use the Lepskii method based on the optimal choice of a bandwidth from a bias-variance decomposition. We then derive convergence rates that are shown to be minimax optimal (up to log terms) in Sobolev classes. Finally, we apply our algorithm to simulated and real biological data.

16.
A new five-parameter distribution called the beta Weibull-geometric (BWG) distribution is proposed. The new distribution is generated from the logit of a beta random variable and includes the Weibull-geometric distribution of Barreto-Souza et al. [The Weibull-geometric distribution, J. Stat. Comput. Simul. 81 (2011), pp. 645–657], the beta Weibull (BW), beta exponential, exponentiated Weibull, and some other lifetime distributions as special cases. A comprehensive mathematical treatment of this distribution is provided. The density function can be expressed as an infinite mixture of BW densities, from which we derive some mathematical properties of the new distribution using the corresponding properties of the BW distribution. The density function of the order statistics and an estimate of the stress-strength parameter are obtained using two general expressions. The model parameters are estimated by maximum likelihood, and the asymptotic distribution of the estimators is also discussed. The capability of the new distribution is examined with various tools using two real data sets.

17.
The use of indices as an estimation tool for process capability is long established among statistical quality professionals. Numerous capability indices have been proposed in recent years. Cpm is one of the most widely used capability indices, and its estimation has attracted much interest. In this paper, we propose a new method for constructing an approximate confidence interval for the index Cpm. The proposed method is based on the asymptotic distribution of the index obtained by the delta method: under some regularity conditions, the distribution of an estimator of Cpm is asymptotically normal.
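A minimal sketch of the point estimator Cpm = (USL − LSL) / (6·sqrt(σ² + (μ − T)²)) is below. The paper's delta-method interval depends on its regularity conditions, so a percentile bootstrap is used here as a simple stand-in for the confidence interval; all names, limits, and data are illustrative.

```python
import numpy as np

def cpm(x, lsl, usl, target):
    """Point estimate of Cpm = (USL - LSL) / (6 * sqrt(sigma^2 + (mu - T)^2))."""
    mu, sig2 = np.mean(x), np.var(x)
    return (usl - lsl) / (6 * np.sqrt(sig2 + (mu - target) ** 2))

def cpm_boot_ci(x, lsl, usl, target, level=0.95, B=2000, seed=0):
    """Percentile-bootstrap interval for Cpm (a simple stand-in
    for a delta-method interval)."""
    rng = np.random.default_rng(seed)
    stats = [cpm(rng.choice(x, size=len(x)), lsl, usl, target) for _ in range(B)]
    return tuple(np.quantile(stats, [(1 - level) / 2, (1 + level) / 2]))

rng = np.random.default_rng(3)
x = rng.normal(10.0, 0.5, 200)                  # process centered at target
point = cpm(x, lsl=8.5, usl=11.5, target=10.0)  # true Cpm here is 1.0
lo, hi = cpm_boot_ci(x, lsl=8.5, usl=11.5, target=10.0)
```

Note how the (μ − T)² term in the denominator penalizes off-target processes, which distinguishes Cpm from the plain Cp index.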

18.
A novel approach to quantile estimation in multivariate linear regression models with change-points is proposed: change-point detection and model estimation are both performed automatically, by adopting either the quantile-fused penalty or its adaptive version. These two methods combine the idea of the check function used for quantile estimation with the L1 penalization principle known from signal processing and, unlike some standard approaches, go beyond the assumptions typically required of the model errors, such as sub-Gaussian or normal distributions. They can effectively handle heavy-tailed random errors and, in general, offer a more complete view of the data, since one can obtain any conditional quantile of the target distribution, not just the conditional mean. The consistency of detection is proved and convergence rates for the parameter estimates are derived. The empirical performance is investigated via an extensive comparative simulation study, and practical utility is demonstrated on a real data example.

19.
Finite mixture models arise in a natural way as models of unobserved population heterogeneity. It is assumed that the population consists of an unknown number k of subpopulations with parameters λ1, ..., λk receiving weights p1, ..., pk. Because of the irregularity of the parameter space, the log-likelihood-ratio statistic (LRS) does not have a χ² limit distribution, and it is therefore difficult to use the LRS to test for the number of components. These problems are circumvented by using the nonparametric bootstrap: the mixture algorithm is applied B times to bootstrap samples obtained from the original sample with replacement, and the number of components k is estimated as the mode of the bootstrap distribution of k̂. This approach is presented using the Times newspaper data and investigated in a simulation study for mixtures of Poisson data.
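The bootstrap scheme can be sketched generically: some selector of k is applied to each bootstrap resample, and the mode of the resulting k̂ values is reported. Here a BIC-based Gaussian mixture selector from scikit-learn stands in for the paper's Poisson mixture algorithm; all names and the simulated data are illustrative.

```python
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

def pick_k(x, k_max=4):
    """Choose the number of components for one sample by BIC
    (a stand-in for the paper's mixture algorithm)."""
    x = x.reshape(-1, 1)
    bic = [GaussianMixture(n_components=k, random_state=0).fit(x).bic(x)
           for k in range(1, k_max + 1)]
    return int(np.argmin(bic)) + 1

def bootstrap_k(x, B=40, seed=0):
    """Apply the selector to B bootstrap resamples; report the mode of k-hat."""
    rng = np.random.default_rng(seed)
    ks = [pick_k(rng.choice(x, size=len(x))) for _ in range(B)]
    return int(stats.mode(ks, keepdims=False).mode)

rng = np.random.default_rng(4)
x = np.r_[rng.normal(0, 1, 300), rng.normal(6, 1, 300)]  # two well-separated groups
k_hat = bootstrap_k(x)
```

Using the mode rather than the mean of the bootstrap distribution keeps the answer an integer and is robust to occasional over- or under-selection on individual resamples.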

20.
When the data contain outliers or come from a population with a heavy-tailed distribution, as is common for spatiotemporal data, estimation methods based on least squares (L2) do not perform well, and more robust methods are required. In this article, we propose local linear estimation for spatiotemporal models based on least absolute deviation (L1) and derive the asymptotic distributions of the L1-estimators under some mild conditions on the spatiotemporal process. Simulation results for two examples, with outliers and a heavy-tailed distribution respectively, show that the L1-estimators outperform the L2-estimators.
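The contrast between the L1 and L2 criteria can be sketched with a plain (global rather than local) linear fit under heavy-tailed noise; the helper `lad_fit` and the simulation setup are illustrative, not the paper's spatiotemporal estimator.

```python
import numpy as np
from scipy.optimize import minimize

def lad_fit(x, y):
    """Least-absolute-deviation (L1) line fit: minimize sum |y - b0 - b1*x|,
    the robust criterion behind the local linear L1-estimator."""
    obj = lambda b: np.sum(np.abs(y - b[0] - b[1] * x))
    return minimize(obj, x0=[0.0, 0.0], method="Nelder-Mead").x

rng = np.random.default_rng(5)
x = rng.uniform(0, 1, 200)
# Cauchy (t with 1 df) noise: heavy-tailed, so L2 estimates can be wildly off
y = 1.0 + 2.0 * x + rng.standard_t(df=1, size=200) * 0.1
b_l1 = lad_fit(x, y)                  # robust L1 estimate of (intercept, slope)
b_l2 = np.polyfit(x, y, 1)[::-1]      # ordinary least squares, for comparison
```

Because the Cauchy noise has no finite variance, the L2 fit has no consistency guarantee here, while the L1 fit targets the conditional median and remains stable.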
