Similar literature (20 results)
1.
In this paper, we consider James–Stein shrinkage and pretest estimation methods for time series following generalized linear models when it is conjectured that some of the regression parameters may be restricted to a subspace. Efficient estimation strategies are developed when there are many covariates in the model and some of them are not statistically significant. Statistical properties of the pretest and shrinkage estimation methods including asymptotic distributional bias and risk are developed. We investigate the relative performances of shrinkage and pretest estimators with respect to the unrestricted maximum partial likelihood estimator (MPLE). We show that the shrinkage estimators have a lower relative mean squared error as compared to the unrestricted MPLE when the number of significant covariates exceeds two. Monte Carlo simulation experiments were conducted for different combinations of inactive covariates and the performance of each estimator was evaluated in terms of its mean squared error. The practical benefits of the proposed methods are illustrated using two real data sets.
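As a rough sketch of the shrinkage and pretest idea described above (not the authors' time-series MPLE implementation), the snippet below combines a hypothetical unrestricted estimate with a restricted one through a Wald-type statistic; the function names, the positive-part rule and the chi-squared cutoff are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def shrinkage_and_pretest(beta_u, beta_r, cov_u, k, alpha=0.05):
    """Generic Stein-type shrinkage and pretest estimators (illustrative only).

    beta_u : unrestricted estimate (e.g. a full-model estimate)
    beta_r : restricted estimate obeying the conjectured subspace restriction
    cov_u  : estimated covariance matrix of beta_u
    k      : number of restrictions (Stein-type shrinkage needs k >= 3)
    """
    diff = beta_u - beta_r
    # Wald-type statistic measuring how far the data are from the restriction
    t_n = float(diff @ np.linalg.solve(cov_u, diff))

    # positive-part Stein-type shrinkage: pull beta_u toward beta_r
    shrink = beta_r + max(0.0, 1.0 - (k - 2) / t_n) * diff

    # pretest: keep the restricted estimate unless the restriction is rejected
    cutoff = stats.chi2.ppf(1 - alpha, df=k)
    pretest = beta_r if t_n <= cutoff else beta_u
    return shrink, pretest

# toy usage with made-up numbers
rng = np.random.default_rng(0)
beta_u = rng.normal(size=5)
beta_r = np.zeros(5)
cov_u = 0.1 * np.eye(5)
print(shrinkage_and_pretest(beta_u, beta_r, cov_u, k=5))
```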

2.
In this note we consider estimation of a mixture model of count data which is composed of two discrete random variables. Conditional and unconditional estimation procedures are given for estimating the unknown parameter(s) of interest using the likelihood function. Asymptotic relative efficiencies are given to examine the amount of information loss in using the two estimation procedures. Specifically, we study the change in asymptotic relative efficiency, if any, in different parameter settings.

3.
The non-homogeneous Poisson process (NHPP) model is a very important class of software reliability models and is widely used in software reliability engineering. NHPPs are characterized by their intensity functions. In the literature it is usually assumed that the functional forms of the intensity functions are known and only some parameters in them are unknown. Parametric statistical methods can then be applied to estimate or to test the unknown reliability models. However, in realistic situations the functional form of the failure intensity is often not well known, or is completely unknown. In this case we have to use functional (non-parametric) estimation methods. Non-parametric techniques do not require any preliminary assumption on the software models and can therefore reduce the bias introduced by parametric modelling. The existing non-parametric methods in the statistical literature are usually not applicable to software reliability data. In this paper we construct non-parametric methods to estimate the failure intensity function of the NHPP model, taking the particularities of software failure data into consideration.
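The paper's own non-parametric construction is not reproduced here; purely as an illustration of functional intensity estimation for an NHPP, the sketch below smooths observed failure times with a Gaussian kernel. The bandwidth and the absence of boundary correction are simplifying assumptions.

```python
import numpy as np

def kernel_intensity(failure_times, grid, bandwidth):
    """Kernel estimate of a failure intensity: lambda_hat(t) = sum_i K_h(t - t_i).

    A generic smoother, not the estimator constructed in the paper; it ignores
    boundary correction and censoring of the observation window.
    """
    t = np.asarray(failure_times, dtype=float)
    grid = np.asarray(grid, dtype=float)
    # Gaussian kernel, summed (not averaged) so the estimate integrates to ~number of failures
    u = (grid[:, None] - t[None, :]) / bandwidth
    k = np.exp(-0.5 * u**2) / (np.sqrt(2 * np.pi) * bandwidth)
    return k.sum(axis=1)

# toy usage: failure times (in CPU hours, say) and an evaluation grid
times = [5.2, 9.8, 14.1, 15.0, 21.3, 22.7, 30.4]
grid = np.linspace(0, 35, 200)
lam_hat = kernel_intensity(times, grid, bandwidth=3.0)
```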

4.
In this paper, we develop marginal analysis methods for longitudinal data under partially linear models. We employ the pretest and shrinkage estimation procedures to estimate the mean response parameters as well as the association parameters, which may be subject to certain restrictions. We provide the analytic expressions for the asymptotic biases and risks of the proposed estimators, and investigate their performance relative to the unrestricted semiparametric least-squares estimator (USLSE). We show that if the dimension of the association parameters exceeds two, the risk of the shrinkage estimators is strictly less than that of the USLSE in most of the parameter space. On the other hand, the risk of the pretest estimator depends on the validity of the restrictions on the association parameters. A simulation study is conducted to evaluate the performance of the proposed estimators relative to that of the USLSE, and a real data example illustrates the practical usefulness of the proposed estimation procedures.

5.
The generalized autoregressive conditional heteroscedastic (GARCH) model has been popular in the analysis of financial time series data with high volatility. Conventionally, the parameter estimation in GARCH models has been performed based on the Gaussian quasi-maximum likelihood. However, when the innovation terms have either heavy-tailed or skewed distributions, the quasi-maximum likelihood estimator (QMLE) does not function well. In order to remedy this defect, we propose the normal mixture QMLE (NM-QMLE), which is obtained from the normal mixture quasi-likelihood, and demonstrate that the NM-QMLE is consistent and asymptotically normal. Finally, we present simulation results and a real data analysis in order to illustrate our findings.
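A minimal sketch of the normal-mixture quasi-likelihood idea for a GARCH(1,1) recursion. The two-component zero-mean mixture below is held fixed for brevity; in practice the mixture parameters would typically be estimated as well, and none of the specific settings here should be read as the paper's formulation.

```python
import numpy as np
from scipy.optimize import minimize

def nm_quasi_loglik(params, y, weights=(0.9, 0.1), sds=(0.8, 2.0)):
    """Negative normal-mixture quasi-log-likelihood for a GARCH(1,1) model.

    params = (omega, alpha, beta); the innovation density is approximated by a
    fixed two-component zero-mean normal mixture (weights, sds are illustrative).
    """
    omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return np.inf
    n = len(y)
    sigma2 = np.empty(n)
    sigma2[0] = np.var(y)                       # common initialisation
    for t in range(1, n):
        sigma2[t] = omega + alpha * y[t - 1] ** 2 + beta * sigma2[t - 1]
    e = y / np.sqrt(sigma2)                     # standardised residuals
    dens = sum(w * np.exp(-0.5 * (e / s) ** 2) / (np.sqrt(2 * np.pi) * s)
               for w, s in zip(weights, sds))
    return -(np.log(dens) - 0.5 * np.log(sigma2)).sum()

# toy usage on heavy-tailed simulated returns (no real GARCH structure, just to exercise the optimiser)
rng = np.random.default_rng(1)
y = 0.01 * rng.standard_t(df=5, size=1000)
fit = minimize(nm_quasi_loglik, x0=[1e-5, 0.05, 0.9], args=(y,), method="Nelder-Mead")
print(fit.x)   # (omega, alpha, beta) estimates
```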

6.
We consider a multicomponent load-sharing system in which the failure rate of a given component depends on the set of working components at any given time. Such systems can arise in software reliability models and in multivariate failure-time models in biostatistics, for example. A load-share rule dictates how stress or load is redistributed to the surviving components after a component fails within the system. In this paper, we assume the load-share rule is unknown and derive methods for statistical inference on load-share parameters based on maximum likelihood. Components with (individual) constant failure rates are observed in two environments: (1) the system load is distributed evenly among the working components, and (2) we assume only that the load on each working component increases when other components in the system fail. Tests for these special load-share models are investigated.

7.
This paper continues the study of the software reliability model of Fakhre-Zakeri & Slud (1995), an "exponential order statistic model" in the sense of Miller (1986) with general mixing distribution, imperfect debugging, and large-sample asymptotics reflecting growth of the initial number of bugs with software size. The parameters of the model are θ (proportional to the initial number of bugs in the software), G(·, μ) (the mixing df, with finite-dimensional unknown parameter μ, for the rates λ_i with which the bugs in the software cause observable system failures), and p (the probability with which a detected bug is instantaneously replaced by another bug instead of being removed). Maximum likelihood estimation theory for (θ, p, μ) is applied to construct a likelihood-based score test, for large-sample data, of the hypothesis of "perfect debugging" (p = 0) vs "imperfect debugging" (p > 0) within the models studied. There are important models (including the Jelinski–Moranda) under which the score statistics with 1/√n normalization are asymptotically degenerate. These statistics, illustrated on a software reliability data set of Musa (1980), can nevertheless serve as important diagnostics for the inadequacy of simple models.

8.
The three-parameter Weibull distribution is widely used in life testing and reliability analysis. In this article, we propose an efficient method for the estimation of parameters and quantiles of the three-parameter Weibull distribution, which avoids the problem of unbounded likelihood, by using statistics invariant to the unknown location. Through a Monte Carlo simulation study, we show that the proposed method performs well compared to other prominent methods, in terms of bias and MSE. Finally, we present two illustrative examples.

9.
Data from complex surveys are being used increasingly to build the same sort of explanatory and predictive models as those used in the rest of statistics. Unfortunately the assumptions underlying standard statistical methods are not even approximately valid for most survey data. The problem of parameter estimation has been largely solved, at least for routine data analysis, through the use of weighted estimating equations, and software for most standard analytical procedures is now available in the major statistical packages. One notable omission from standard software is an analogue of the likelihood ratio test. An exception is the Rao–Scott test for loglinear models in contingency tables. In this paper we show how the Rao–Scott test can be extended to handle arbitrary regression models. We illustrate the process of fitting a model to survey data with an example from NHANES.

10.
In this paper, we consider the problem of estimation of semi-linear regression models. Using invariance arguments, Bhowmik and King [2007. Maximal invariant likelihood based testing of semi-linear models. Statist. Papers 48, 357–383] derived the probability density function of the maximal invariant statistic for the non-linear component of these models. Using this density function as a likelihood function allows us to estimate these models in a two-step process. First the non-linear component parameters are estimated by maximising the maximal invariant likelihood function. Then the non-linear component, with the parameter values replaced by estimates, is treated as a regressor and ordinary least squares is used to estimate the remaining parameters. We report the results of a simulation study conducted to compare the accuracy of this approach with full maximum likelihood and maximum profile-marginal likelihood estimation. We find maximising the maximal invariant likelihood function typically results in less biased and lower variance estimates than those from full maximum likelihood.

11.
Weibull distributions have received wide-ranging applications in many areas including reliability, hydrology and communication systems. Many estimation methods have been proposed for Weibull distributions, but there has not been a comprehensive comparison of them. Most studies have focused on comparing maximum likelihood estimation (MLE) with one of the other approaches. In this paper, we first propose an L-moment estimator for the Weibull distribution. Then, a comprehensive comparison is made of the following methods: maximum likelihood estimation (MLE), the method of logarithmic moments, the percentile method, the method of moments and the method of L-moments.
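For the two-parameter Weibull the first two L-moments are λ1 = σΓ(1 + 1/k) and λ2 = λ1(1 − 2^(−1/k)), so matching sample L-moments gives closed-form estimates. The sketch below implements this standard matching; it is not claimed to coincide with the estimator proposed in the paper.

```python
import numpy as np
from scipy.special import gamma

def weibull_lmoments_fit(x):
    """L-moment estimates of the two-parameter Weibull (shape k, scale sigma).

    Matches the sample L-moments l1, l2 to the population values
    lambda1 = sigma*Gamma(1 + 1/k) and lambda2 = lambda1*(1 - 2**(-1/k)).
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    b0 = x.mean()
    b1 = np.sum((np.arange(n) / (n - 1)) * x) / n   # unbiased estimator of beta_1
    l1, l2 = b0, 2 * b1 - b0
    tau2 = l2 / l1                                   # L-CV
    k_hat = -np.log(2) / np.log(1 - tau2)
    sigma_hat = l1 / gamma(1 + 1 / k_hat)
    return k_hat, sigma_hat

# toy check on simulated data with shape 1.8 and scale 2.5
rng = np.random.default_rng(2)
sample = 2.5 * rng.weibull(1.8, size=500)
print(weibull_lmoments_fit(sample))
```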

12.
The problems of estimation and hypotheses testing on the parameters of two correlated linear models are discussed. Such models are known to have direct applications in epidemiologic research, particularly in the field of family studies. When the data are unbalanced, the maximum-likelihood estimation of the parameters is achieved by adopting a fairly simple numerical algorithm. The asymptotic variances and covariances of the estimators are derived, and the procedures are illustrated on arterial-blood-pressure data from the literature.

13.
This paper deals with the problem of estimating all the unknown parameters of geometric fractional Brownian processes from discrete observations. The estimation procedure is built upon the marriage of the quadratic variation and the maximum likelihood approach. The asymptotic properties of the estimators are provided. Moreover, we compare our method with the approach proposed by Misiran et al. [Fractional Black-Scholes models: complete MLE with application to fractional option pricing. In: International conference on optimization and control; Guiyang, China; 2010. p. 573–586], namely the complete maximum likelihood estimation. Simulation studies confirm the theoretical findings and illustrate that our methodology is efficient and reliable. To show how to apply our approach in realistic contexts, an empirical study of the Chinese financial market is also presented.
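As a rough illustration of the quadratic-variation component of such a procedure (not the estimator derived in the paper), the sketch below recovers the Hurst exponent by comparing the realized quadratic variation of log prices at two sampling frequencies, then backs out the volatility and a crude drift estimate. The model form S_t = S_0·exp(μt + σB_H(t)) and all variable names are assumptions made for illustration.

```python
import numpy as np

def gfbm_qv_estimates(prices, delta):
    """Change-of-frequency estimates of (H, sigma, mu) for a geometric fBm
    S_t = S_0 * exp(mu*t + sigma*B_H(t)) observed at spacing delta.

    Uses the scaling of the quadratic variation of log prices:
    QV at spacing 2*delta over QV at spacing delta ~ 2**(2H - 1).
    """
    logp = np.log(np.asarray(prices, dtype=float))
    inc1 = np.diff(logp)              # increments at spacing delta
    inc2 = np.diff(logp[::2])         # increments at spacing 2*delta
    qv1, qv2 = np.sum(inc1**2), np.sum(inc2**2)
    h_hat = 0.5 * (1.0 + np.log2(qv2 / qv1))
    n = len(inc1)
    sigma_hat = np.sqrt(qv1 / (n * delta ** (2 * h_hat)))
    mu_hat = (logp[-1] - logp[0]) / (n * delta)   # crude drift from the total log-return
    return h_hat, sigma_hat, mu_hat

# sanity check with ordinary Brownian motion (H = 0.5, sigma = 0.2)
rng = np.random.default_rng(3)
dt = 1 / 252
logret = 0.05 * dt + 0.2 * np.sqrt(dt) * rng.standard_normal(5000)
prices = 100 * np.exp(np.cumsum(np.insert(logret, 0, 0.0)))
print(gfbm_qv_estimates(prices, dt))
```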

14.
For a class of vector-valued non-Gaussian stationary processes, we develop the Cressie–Read power-divergence (CR) statistic approach which has been proposed for the i.i.d. case. The CR statistic includes empirical likelihood as a special case. Therefore, by adopting this CR statistic approach, the theory of estimation and testing based on empirical likelihood is greatly extended. We use an extended Whittle likelihood as score function and derive the asymptotic distribution of the CR statistic. We apply this result to estimation of autocorrelation and the AR coefficient, and get narrower confidence intervals than those obtained by existing methods. We also consider the power properties of the test based on asymptotic theory. Under a sequence of contiguous local alternatives, we derive the asymptotic distribution of the CR statistic. The problem of testing autocorrelation is discussed and we introduce some interesting properties of the local power.

15.
In this paper we consider structural measurement error models within the elliptical family of distributions. We consider dependent and independent elliptical models, each of which requires its own treatment. We discuss in each case estimation and hypothesis testing using maximum likelihood theory. As shown, most of the developments obtained under normal theory carry through to the dependent case. In the independent case, emphasis is placed on the t-distribution, an important member of the elliptical family. Correcting likelihood ratio statistics in both cases is also of major interest.

16.
The class of joint mean-covariance models uses the modified Cholesky decomposition of the within-subject covariance matrix in order to arrive at an unconstrained, statistically meaningful reparameterisation. The new parameterisation of the covariance matrix has two sets of parameters that separately describe the variances and correlations. Together with the mean or regression parameters, these models therefore have three distinct sets of parameters. In order to alleviate the inefficient estimation and downward bias in the variance estimates inherent in maximum likelihood estimation, the usual REML procedure adjusts for the degrees of freedom lost due to the estimation of the mean parameters. Because of the parameterisation of the joint mean-covariance models, it is possible to adapt the usual REML procedure so as to estimate the variance (correlation) parameters by taking into account the degrees of freedom lost through the estimation of both the mean and correlation (variance) parameters. To this end, we propose adjustments to the estimation procedures based on the modified and adjusted profile likelihoods. The methods are illustrated by an application to a real data set and by simulation studies. The Canadian Journal of Statistics 40: 225–242; 2012 © 2012 Statistical Society of Canada
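For background on the parameterisation itself (not the proposed REML-type adjustments), the sketch below computes the modified Cholesky decomposition TΣT′ = D of a within-subject covariance matrix: the negated sub-diagonal entries of T are the unconstrained generalized autoregressive parameters and the diagonal of D contains the innovation variances.

```python
import numpy as np

def modified_cholesky(sigma):
    """Modified Cholesky decomposition T @ sigma @ T.T = D.

    T is unit lower triangular; -T[j, k] for k < j are the generalized
    autoregressive parameters and diag(D) are the innovation variances,
    unconstrained apart from the variances being positive.
    """
    sigma = np.asarray(sigma, dtype=float)
    m = sigma.shape[0]
    T = np.eye(m)
    d = np.empty(m)
    d[0] = sigma[0, 0]
    for j in range(1, m):
        # regression coefficients of y_j on y_1, ..., y_{j-1}
        phi = np.linalg.solve(sigma[:j, :j], sigma[:j, j])
        T[j, :j] = -phi
        d[j] = sigma[j, j] - sigma[:j, j] @ phi
    return T, np.diag(d)

# quick check on an AR(1)-like covariance matrix
rho = 0.6
S = rho ** np.abs(np.subtract.outer(np.arange(4), np.arange(4)))
T, D = modified_cholesky(S)
print(np.allclose(T @ S @ T.T, D))   # True
```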

17.
In this paper, we study the class of inflated modified power series distributions (IMPSD) where inflation occurs at any of the support points. This class includes, among others, the generalized Poisson, the generalized negative binomial, the generalized logarithmic series and the lost games distributions. We give expressions for the moments, factorial moments and central moments of the IMPSD. The maximum likelihood estimates of the parameters of the IMPSD and the variance–covariance matrix of the estimators are obtained. We derive these estimators and their information matrices for the particular members of the IMPSD class mentioned above. The second part of this paper deals with the distribution of a sum of independent and identically distributed random variables taking values s, s+1, s+2, …, with s ≥ 0, whose modified power series distributions are inflated at the point s.
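As a small numerical illustration of inflation at a support point s, the sketch below mixes a point mass at s with a base pmf and fits the mixture by maximum likelihood; an ordinary Poisson pmf stands in as a placeholder for a modified power series member, so the parameterisation is not the paper's.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def inflated_pmf(x, s, pi, base_pmf):
    """pmf of a distribution inflated at the support point s:
    P(X = s) = pi + (1 - pi) * p(s),  P(X = x) = (1 - pi) * p(x) otherwise."""
    x = np.asarray(x)
    return np.where(x == s, pi + (1 - pi) * base_pmf(x), (1 - pi) * base_pmf(x))

def neg_loglik(params, data, s):
    """Negative log-likelihood with a Poisson base pmf as a simple placeholder."""
    pi, lam = params
    if not (0 <= pi < 1) or lam <= 0:
        return np.inf
    p = inflated_pmf(data, s, pi, lambda k: stats.poisson.pmf(k, lam))
    return -np.sum(np.log(p))

# toy usage: counts inflated at s = 1
rng = np.random.default_rng(4)
data = np.where(rng.random(500) < 0.2, 1, rng.poisson(3.0, size=500))
print(minimize(neg_loglik, x0=[0.1, 2.0], args=(data, 1), method="Nelder-Mead").x)
```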

18.
The introduction of software to calculate maximum likelihood estimates for mixed linear models has made likelihood estimation a practical alternative to methods based on sums of squares. Likelihood-based tests and confidence intervals, however, may be misleading in problems with small sample sizes. This paper discusses an adjusted version of the directed log-likelihood statistic for mixed models that is highly accurate for testing one-parameter hypotheses. Introduced by Skovgaard (1996, Bernoulli, 2, 145–165), the statistic, as we show, has a simple compact form in mixed models that may be obtained from standard software. Simulation studies indicate that this statistic is more accurate than many of the specialized procedures that have been advocated.

19.
Factor models, structural equation models (SEMs) and random-effect models share the common feature that they assume latent or unobserved random variables. Factor models and SEMs allow well-developed procedures for a rich class of covariance models with many parameters, while random-effect models allow well-developed procedures for non-normal models, including heavy-tailed distributions for responses and random effects. In this paper, we show how these two developments can be combined to yield an extremely rich class of models, which can be beneficial to both areas. A new fitting procedure for binary factor models and a robust estimation approach for continuous factor models are proposed.

20.
In this paper, we introduce a new family of transmuted distributions, the cubic rank transmutation map distribution. This new proposal increases the flexibility of transmuted distributions, enabling the modelling of more complex data such as data with bimodal hazard rates. In order to illustrate the usefulness of the cubic rank transmutation map, we use two well-known lifetime distributions, namely the Weibull and log-logistic models. Several mathematical properties of these new distributions, namely the cubic rank transmuted Weibull distribution and the cubic rank transmuted log-logistic distribution, are derived. Then, the maximum likelihood estimation of the model parameters is described. A simulation study designed to assess the properties of this estimation procedure is then carried out. Finally, applications of the proposed models and their fit are illustrated with some datasets, and the corresponding diagnostic analyses are also provided.
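One cubic rank transmutation map that appears in the literature is F(x) = λ1·G(x) + (λ2 − λ1)·G(x)² + (1 − λ2)·G(x)³ for a baseline cdf G; whether this is exactly the map and the parameter constraints used in the paper should be verified against the original. The sketch below applies that map to a Weibull baseline as an illustration.

```python
import numpy as np

def cubic_transmuted_cdf(g, lam1, lam2):
    """One cubic rank transmutation map from the literature (verify against the paper):
    F = lam1*G + (lam2 - lam1)*G**2 + (1 - lam2)*G**3 for a baseline cdf value G in [0, 1]."""
    return lam1 * g + (lam2 - lam1) * g**2 + (1 - lam2) * g**3

def ctw_cdf(x, shape, scale, lam1, lam2):
    """Cubic-transmuted Weibull cdf obtained by plugging the Weibull cdf into the map."""
    g = 1.0 - np.exp(-(np.asarray(x, dtype=float) / scale) ** shape)
    return cubic_transmuted_cdf(g, lam1, lam2)

# quick sanity check: the result is still a cdf (0 at 0, 1 in the limit, nondecreasing)
x = np.linspace(0, 10, 201)
F = ctw_cdf(x, shape=1.5, scale=2.0, lam1=0.4, lam2=0.7)
print(F[0], F[-1], bool(np.all(np.diff(F) >= -1e-12)))
```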
