Similar Documents

20 similar documents retrieved.
1.
The two-parameter Gamma distribution is widely used for modeling lifetime distributions in reliability theory. There is an extensive literature on inference for the individual parameters of the Gamma distribution, namely the shape parameter k and the scale parameter θ, when the other parameter is known. However, reliability practitioners are usually most interested in statistical inference about the mean lifetime μ, which for the Gamma distribution equals the product θk. The problem of inference on the mean μ when both θ and k are unknown has received less attention in the literature. In this paper we review the existing methods for interval estimation of μ. A comparative study indicates that the existing methods are either too approximate, yielding less reliable confidence intervals, or computationally quite complicated, requiring advanced computing facilities. We propose a new simple method for interval estimation of the Gamma mean and compare its performance with the existing methods. The comparative study shows that the newly proposed, computationally simple optimum power normal approximation method works best even for small sample sizes.
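For readers who want a simple baseline to compare interval methods against, the sketch below computes a percentile parametric-bootstrap confidence interval for the Gamma mean μ = θk. It is not the optimum power normal approximation proposed in the paper; the function name and settings are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def gamma_mean_ci_bootstrap(x, level=0.95, n_boot=2000, seed=None):
    """Percentile parametric-bootstrap CI for the Gamma mean mu = k * theta."""
    rng = np.random.default_rng(seed)
    # Fit shape k and scale theta by maximum likelihood (location fixed at 0).
    k_hat, _, theta_hat = stats.gamma.fit(x, floc=0)
    n = len(x)
    boot_means = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.gamma(k_hat, theta_hat, size=n)    # resample from the fitted model
        kb, _, tb = stats.gamma.fit(xb, floc=0)     # refit on the bootstrap sample
        boot_means[b] = kb * tb                     # bootstrap estimate of mu
    alpha = 1 - level
    return np.quantile(boot_means, [alpha / 2, 1 - alpha / 2])

# Example: 25 lifetimes from a Gamma(k=2, theta=3) population (true mean 6).
sample = np.random.default_rng(1).gamma(2.0, 3.0, size=25)
print(gamma_mean_ci_bootstrap(sample, seed=2))
```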

2.
Maximum-likelihood estimation is interpreted as a procedure for generating approximate pivotal quantities, that is, functions u(X;θ) of the data X and parameter θ that have distributions not involving θ. Further, these pivotals should be efficient in the sense of reproducing approximately the likelihood function of θ based on X, and they should be approximately linear in θ. To this end the effect of replacing θ by a parameter ϕ = ϕ(θ) is examined. The relationship of maximum-likelihood estimation interpreted in this way to conditional inference is discussed. Examples illustrating this use of maximum-likelihood estimation on small samples are given.  相似文献   

3.
We consider testing inference in inflated beta regressions subject to model misspecification. In particular, quasi-z tests based on sandwich covariance matrix estimators are described and their finite sample behavior is investigated via Monte Carlo simulations. The numerical evidence shows that quasi-z testing inference can be considerably more accurate than inference made through the usual z tests, especially when there is model misspecification. Interval estimation is also considered. We also present an empirical application that uses real (not simulated) data.

4.
An important practical issue in applying heavy-tailed distributions is how to choose the sample fraction or threshold, since only a fraction of the upper order statistics can be employed in the inference. Recently, Guillou & Hall (2001; Journal of the Royal Statistical Society B, 63, 293–305) proposed a simple way to choose the threshold in estimating a tail index. In this article, the author first gives an intuitive explanation of the approach of Guillou & Hall (2001) and then proposes an alternative method, which can be extended to other settings such as extreme value index estimation and tail dependence function estimation. Further, the author proposes to combine this threshold-selection method with a bias-reduction estimator to improve the performance of tail index estimation, interval estimation of a tail index, and high quantile estimation. Simulation studies on both point estimation and interval estimation of a tail index show that the two selection procedures are comparable and that bias-reduction estimation with the threshold selected by either method is preferred. The Canadian Journal of Statistics © 2009 Statistical Society of Canada
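As background for the threshold-selection problem, the snippet below implements the classical Hill estimator evaluated at several sample fractions k. The Guillou & Hall (2001) rule and the author's alternative for choosing k are not reproduced here, and the simulated Pareto data are purely illustrative.

```python
import numpy as np

def hill_estimator(x, k):
    """Hill estimator of the extreme value index based on the k largest observations."""
    xs = np.sort(x)                         # ascending order statistics
    top = xs[-k:]                           # X_{(n-k+1)}, ..., X_{(n)}
    return np.mean(np.log(top)) - np.log(xs[-k - 1])

# Diagnostic: the estimate as a function of the sample fraction k.
rng = np.random.default_rng(0)
x = rng.pareto(a=2.0, size=2000) + 1.0      # Pareto tail with index alpha = 2
for k in (50, 100, 200, 400):
    print(k, 1.0 / hill_estimator(x, k))    # 1/Hill estimates alpha
```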

5.
Modelling of HIV dynamics in AIDS research has greatly improved our understanding of the pathogenesis of HIV-1 infection and has guided the treatment of AIDS patients and the evaluation of antiretroviral therapies. Some of the model parameters have practical meanings for which prior knowledge is available, but others may have no prior knowledge; incorporating the available priors can improve statistical inference. Although there are extensive Bayesian and frequentist estimation methods for viral dynamic models, little work has been done on making simultaneous inference about both types of parameters. In this article, we propose a hybrid Bayesian inference approach for viral dynamic nonlinear mixed-effects models using the Bayesian frequentist hybrid theory developed in Yuan [Bayesian frequentist hybrid inference, Ann. Statist. 37 (2009), pp. 2458–2501]. Compared with frequentist inference in a real example and two simulation examples, the hybrid Bayesian approach improves inference accuracy without an excessive computational burden.

6.
Statistical inference based on ranked set sampling has primarily been motivated by nonparametric problems. However, the sampling procedure can provide an improved estimator of the population mean when the population is partially known. In this article, we consider estimation of the population mean and variance for location-scale families of distributions. We derive and compare different unbiased estimators of these parameters based on r independent replications of a ranked set sample of size n. Large-sample properties, along with asymptotic relative efficiencies, help identify which estimators are best suited to different location-scale distributions.
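A minimal simulation of the idea, assuming perfect ranking within sets, is sketched below; it only contrasts the variance of the sample mean under simple random sampling with that under r replications of a ranked set sample of size n, and does not reproduce the paper's unbiased location-scale estimators.

```python
import numpy as np

def ranked_set_sample(rng, draw, n):
    """One ranked set sample of size n: draw n sets of n units, rank each set
    (here by the actual values, i.e. perfect ranking), and keep the i-th
    ranked unit from the i-th set."""
    sets = draw(rng, size=(n, n))
    sets.sort(axis=1)
    return sets[np.arange(n), np.arange(n)]    # diagonal: i-th smallest of set i

rng = np.random.default_rng(0)
draw = lambda rng, size: rng.normal(10.0, 2.0, size=size)   # a location-scale example
n, r, reps = 5, 4, 5000

srs_means = [draw(rng, size=n * r).mean() for _ in range(reps)]
rss_means = [np.concatenate([ranked_set_sample(rng, draw, n) for _ in range(r)]).mean()
             for _ in range(reps)]
print("var of mean, SRS:", np.var(srs_means))
print("var of mean, RSS:", np.var(rss_means))   # typically smaller under perfect ranking
```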

7.
Linear mixed models are widely used when multiple correlated measurements are made on each unit of interest. In many applications, the units may form several distinct clusters, and such heterogeneity can be more appropriately modelled by a finite mixture linear mixed model. The classical estimation approach, in which both the random effects and the error terms are assumed to follow normal distributions, is sensitive to outliers, and failure to accommodate outliers may greatly jeopardize model estimation and inference. We propose a new mixture linear mixed model using the multivariate t distribution. For each mixture component, we assume the response and the random effects jointly follow a multivariate t distribution, which conveniently robustifies the estimation procedure. An efficient expectation conditional maximization algorithm is developed for maximum likelihood estimation. The degrees-of-freedom parameters of the t distributions are chosen data-adaptively to achieve a flexible trade-off between robustness and efficiency of estimation. Simulation studies and an application to longitudinal lung-growth data showcase the efficacy of the proposed approach.

8.
The maximum likelihood (ML) method is used to estimate the unknown Gamma regression (GR) coefficients. In the presence of multicollinearity, the variance of the ML estimator becomes inflated and inference based on the ML method may not be trustworthy. To combat multicollinearity, the Liu estimator has been used. In this estimator, estimation of the Liu parameter d is an important problem. A few methods are available in the literature for estimating this parameter. This study considers some of these methods and also proposes some new methods for estimating d. A Monte Carlo simulation study is conducted to assess the performance of the proposed methods, with the mean squared error (MSE) as the performance criterion. Based on the Monte Carlo simulation and application results, the Liu estimator is shown to be consistently superior to the ML estimator, and a recommendation is given on which estimator of the Liu parameter should be used in the Liu estimator for the GR model.
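For orientation, the classical Liu estimator in the linear model is β̂_d = (X′X + I)⁻¹(X′X + dI)β̂_OLS with 0 < d < 1; in a Gamma regression, X′X is typically replaced by the weighted cross-product X′ŴX from the final iteratively reweighted least-squares step and β̂_OLS by the ML estimate. The sketch below shows only the linear-model form, with illustrative names and data.

```python
import numpy as np

def liu_estimator(X, y, d):
    """Classical Liu estimator: beta_d = (X'X + I)^{-1} (X'X + d I) beta_OLS."""
    XtX = X.T @ X
    beta_ols = np.linalg.solve(XtX, X.T @ y)
    p = X.shape[1]
    return np.linalg.solve(XtX + np.eye(p), (XtX + d * np.eye(p)) @ beta_ols)

# Tiny illustration with two nearly collinear columns.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
X[:, 2] = X[:, 1] + 0.01 * rng.normal(size=50)
y = X @ np.array([1.0, 2.0, 2.0]) + rng.normal(size=50)
print(liu_estimator(X, y, d=0.5))
```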

9.
The additive hazards model is one of the most commonly used regression models in the analysis of failure time data and many methods have been developed for its inference in various situations. However, no established estimation procedure exists when there are covariates with missing values and the observed responses are interval-censored; both types of complications arise in various settings including demographic, epidemiological, financial, medical and sociological studies. To address this deficiency, we propose several inverse probability weight-based and reweighting-based estimation procedures for the situation where covariate values are missing at random. The resulting estimators of regression model parameters are shown to be consistent and asymptotically normal. The numerical results that we report from a simulation study suggest that the proposed methods work well in practical situations. An application to a childhood cancer survival study is provided. The Canadian Journal of Statistics 48: 499–517; 2020 © 2020 Statistical Society of Canada
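The inverse-probability-weighting idea that underlies such procedures can be illustrated in a much simpler setting: reweighting complete cases for a covariate that is missing at random, with the missingness probability estimated from the always-observed data. The sketch below is a generic illustration under assumed simulated data and does not implement the additive hazards estimators of the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                          # always-observed covariate
x = 0.5 * z + rng.normal(size=n)                # covariate subject to missingness
p_obs = 1 / (1 + np.exp(-(0.3 + 1.2 * z)))      # MAR: missingness depends on z only
r = rng.uniform(size=n) < p_obs                 # r = True if x is observed

# Step 1: model P(observed | z) using everyone.
ps = LogisticRegression().fit(z.reshape(-1, 1), r).predict_proba(z.reshape(-1, 1))[:, 1]
# Step 2: weight the complete cases by the inverse of that probability.
w = r / ps
ipw_mean = np.sum(w * np.where(r, x, 0.0)) / np.sum(w)
print(np.mean(x[r]), ipw_mean, np.mean(x))      # complete-case vs IPW vs full-data mean
```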

10.
This article focuses on simulation-based inference for time-deformation models directed by a duration process. To better capture the heavy-tail property of time series of financial asset returns, the innovation of the observation equation is further assumed to follow a Student-t distribution. Suitable Markov chain Monte Carlo (MCMC) algorithms, which are hybrids of Gibbs and slice samplers, are proposed for estimating the parameters of these models. In the algorithms, the parameters of the models can be sampled either directly from known distributions or through an efficient slice sampler. The states are simulated one at a time using a Metropolis-Hastings method, where the proposal distributions are sampled through a slice sampler. Simulation studies conducted in this article suggest that the extended models and accompanying MCMC algorithms work well in terms of parameter estimation and volatility forecasting.
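A slice sampler in its simplest univariate form (stepping-out followed by shrinkage, as in Neal, 2003) is sketched below for a Student-t target; the hybrid Gibbs/slice scheme for the full time-deformation model is considerably more involved, and the target and tuning constants here are assumptions for illustration only.

```python
import numpy as np

def slice_sample(logf, x0, n_draws, w=1.0, seed=None):
    """Univariate slice sampler with stepping-out and shrinkage."""
    rng = np.random.default_rng(seed)
    x, out = x0, []
    for _ in range(n_draws):
        logu = logf(x) + np.log(rng.uniform())   # vertical level defining the slice
        left = x - w * rng.uniform()             # step out to bracket the slice
        right = left + w
        while logf(left) > logu:
            left -= w
        while logf(right) > logu:
            right += w
        while True:                              # shrinkage sampling within the bracket
            xp = rng.uniform(left, right)
            if logf(xp) > logu:
                x = xp
                break
            if xp < x:
                left = xp
            else:
                right = xp
        out.append(x)
    return np.array(out)

# Example: sample from a Student-t(3) target, echoing heavy-tailed innovations.
draws = slice_sample(lambda x: -2.0 * np.log1p(x * x / 3.0), 0.0, 5000, seed=1)
print(draws.mean(), draws.std())
```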

11.
This paper focuses on inference based on the confidence distributions of the nonparametric regression function and its derivatives, in which dependent inferences are combined by using information about their dependency structure. We first give a motivating example from a production operation system to illustrate the practical relevance of the problems studied in this paper. A goodness-of-fit test for the polynomial regression model is proposed on the basis of combined confidence distribution inference, which reduces to Fisher's combination statistic in some cases. Based on the test results, a combined estimator for the pth-order derivative of the nonparametric regression function is provided, together with its large-sample properties. The performance of the proposed test and estimation method is illustrated by three specific examples, and the motivating example is then analyzed in detail. The simulated and real data examples illustrate the good performance and practicability of the proposed methods based on confidence distributions.
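Fisher's combination statistic mentioned above aggregates independent p-values as −2Σ log pᵢ, which is chi-squared with 2k degrees of freedom under the null; a minimal call is shown below. The paper's combined confidence-distribution inference additionally exploits the dependency structure between the component inferences, which this independent-case rule ignores.

```python
from scipy import stats

# p-values from component tests, assumed independent for this illustration
p_values = [0.04, 0.21, 0.08]
chi2_stat, combined_p = stats.combine_pvalues(p_values, method='fisher')
print(chi2_stat, combined_p)   # -2 * sum(log p_i) referred to chi-square with 2k df
```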

12.
Statistical inference of genetic regulatory networks is essential for understanding the temporal interactions of regulatory elements inside cells. In this work, we propose to infer the parameters of the ordinary differential equations using techniques from functional data analysis (FDA), regarding the observed time-course expression data as continuous-time curves. For networks with a large number of genes, we take advantage of the sparsity of the networks by penalizing the linear coefficients with an L1 norm. The ability of the algorithm to infer network structure is demonstrated using cell-cycle time-course data for Saccharomyces cerevisiae.
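A minimal sketch of this FDA-plus-sparsity idea, under a linear ODE model dxᵢ/dt = Σⱼ aᵢⱼ xⱼ, is given below: each expression curve is smoothed with a spline, the derivative is evaluated from the smoother, and one L1-penalized regression is fitted per target gene. The function names, smoothing choice, and penalty level are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from sklearn.linear_model import Lasso

def infer_network(t, expr, alpha=0.05, s=None):
    """expr: (n_times, n_genes) time-course matrix.  Fit x_i'(t) ~ sum_j a_ij x_j(t)
    with an L1 penalty on the coefficients, one lasso per target gene."""
    n_times, n_genes = expr.shape
    smooth = np.empty_like(expr, dtype=float)
    deriv = np.empty_like(expr, dtype=float)
    for j in range(n_genes):                   # FDA step: smooth each observed curve
        spl = UnivariateSpline(t, expr[:, j], s=s)
        smooth[:, j] = spl(t)
        deriv[:, j] = spl.derivative()(t)      # derivative of the fitted spline
    A = np.empty((n_genes, n_genes))
    for i in range(n_genes):                   # sparse regression per ODE equation
        A[i] = Lasso(alpha=alpha).fit(smooth, deriv[:, i]).coef_
    return A
```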

13.
To compare two samples under Type I censorship, this article proposes a method of semiparametric inference for the two-sample location-scale problem when the model for the two samples is characterized by an unknown distribution and two unknown parameters. Simultaneous estimators for both the location-shift and scale-change parameters are given and are shown to be strongly consistent and asymptotically normal. The approach in this article can also be used for scale-shape models. Monte Carlo studies indicate that the proposed estimation procedure performs well in finite samples and under heavy censoring, maintains high relative efficiencies over a wide range of censoring proportions, is robust to model misspecification, and outperforms other competitive estimators.

14.
The equivalence of some tests of hypothesis and confidence limits is well known. When, however, the confidence limits are computed only after rejection of a null hypothesis, the usual unconditional confidence limits are no longer valid. This refers to a strict two-stage inference procedure: first test the hypothesis of interest and if the test results in a rejection decision, then proceed with estimating the relevant parameter. Under such a situation, confidence limits should be computed conditionally on the specified outcome of the test under which estimation proceeds. Conditional confidence sets will be longer than unconditional confidence sets and may even contain values of the parameter previously rejected by the test of hypothesis. Conditional confidence limits for the mean of a normal population with known variance are used to illustrate these results. In many applications, these results indicate that conditional estimation is probably not good practice.
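The point that the usual limits lose their coverage once estimation is made conditional on rejection is easy to check by simulation for the normal-mean example; the sketch below assumes a particular true mean and a known variance purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu, sigma, n = 0.3, 1.0, 25                                   # assumed true mean, known sd
z = stats.norm.ppf(0.975)
se = sigma / np.sqrt(n)
reps = 200_000

xbar = rng.normal(mu, se, size=reps)                          # sampling dist. of the mean
covered = np.abs(xbar - mu) <= z * se                         # usual 95% CI covers mu?
rejected = np.abs(xbar) > z * se                              # two-sided test of H0: mu = 0

print("unconditional coverage:        ", covered.mean())               # close to 0.95
print("coverage given H0 was rejected:", covered[rejected].mean())     # falls below 0.95
```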

15.
The penalized spline is a popular method for function estimation when the assumption of "smoothness" is valid. In this paper, methods for estimation and inference are proposed using penalized splines under additional constraints of shape, such as monotonicity or convexity. The constrained penalized spline estimator is shown to have the same convergence rates as the corresponding unconstrained penalized spline, although in practice the squared error loss is typically smaller for the constrained versions. The penalty parameter may be chosen with generalized cross-validation, which also provides a method for determining if the shape restrictions hold. The method is not a formal hypothesis test, but is shown to have nice large-sample properties, and simulations show that it compares well with existing tests for monotonicity. Extensions to the partial linear model, the generalized regression model, and the varying coefficient model are given, and examples demonstrate the utility of the methods. The Canadian Journal of Statistics 40: 190–206; 2012 © 2012 Statistical Society of Canada

16.
In the literature on encompassing [see e.g. Mizon-Richard (1986), Hendry-Richard (1990), Florens-Hendry-Richard (1987)] there is a basic contradiction: on the one hand it is said that it is not possible to assume that the true distribution belongs to one of two competing models M1 and M2, but, on the other hand, this assumption is made in the study of encompassing tests. In this paper we first propose a formal definition of encompassing, we then briefly examine the properties of this notion, and we propose encompassing tests which do not assume that the true distribution belongs to M1 or M2; these tests are based on simulations. Finally, generalizing an idea used in the definition of an encompassing test (the GET test), we propose a new kind of inference, called indirect inference, which allows for estimation and test procedures when the model is too complicated to be treated by usual methods (for instance maximum likelihood methods); the only assumption made on the model is that it can be simulated, which seems to be a minimal requirement. This new class of inference methods can be used in a large number of domains and some examples are given. The present paper is based on Gouriéroux-Monfort (1992) and Gouriéroux-Monfort-Renault (1993), respectively GM and GMR hereafter. Invited paper at the Conference on "Statistical Tests: Methodology and Econometric Applications", held in Bologna, Italy, 27–28 May 1993.
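A toy version of indirect inference is sketched below: an MA(1) model, treated for the sake of the example as if it were awkward to estimate directly, is estimated by matching a simple auxiliary statistic (the lag-1 autocorrelation) between simulated and observed series. The model, auxiliary statistic, and function names are illustrative assumptions; the GET encompassing test itself is not implemented.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def simulate_ma1(theta, n, rng):
    e = rng.normal(size=n + 1)
    return e[1:] + theta * e[:-1]                 # structural model: MA(1)

def aux_stat(y):                                  # auxiliary statistic: lag-1 autocorrelation
    y = y - y.mean()
    return np.dot(y[1:], y[:-1]) / np.dot(y, y)

rng = np.random.default_rng(0)
y_obs = simulate_ma1(0.5, 500, rng)               # pretend these are the observed data
s_obs = aux_stat(y_obs)

def objective(theta, n_sim=20):
    rng_sim = np.random.default_rng(123)          # common random numbers across theta
    sims = [aux_stat(simulate_ma1(theta, len(y_obs), rng_sim)) for _ in range(n_sim)]
    return (np.mean(sims) - s_obs) ** 2           # match simulated to observed statistic

res = minimize_scalar(objective, bounds=(-0.95, 0.95), method='bounded')
print(res.x)                                      # indirect-inference estimate of theta
```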

17.
The last decade saw enormous progress in the development of causal inference tools to account for noncompliance in randomized clinical trials. With survival outcomes, structural accelerated failure time (SAFT) models enable causal estimation of the effects of observed treatments without making direct assumptions on the compliance selection mechanism. The traditional proportional hazards model has, however, rarely been used for causal inference. The estimator proposed by Loeys and Goetghebeur (2003, Biometrics, 59, 100–105) is limited to the setting of all-or-nothing exposure. In this paper, we propose an estimation procedure for more general causal proportional hazards models linking the distribution of potential treatment-free survival times to the distribution of observed survival times via observed (time-constant) exposures. Specifically, we first build models for the observed exposure-specific survival times. Next, using the proposed causal proportional hazards model, the exposure-specific survival distributions are backtransformed to their treatment-free counterparts to obtain, after proper mixing, the unconditional treatment-free survival distribution. Estimation of the parameter(s) in the causal model is then based on minimizing a test statistic for equality of the backtransformed survival distributions between randomized arms.

18.
We propose a varying-coefficient autoregressive model that contains additive models, varying-coefficient models, partially linear models and low-dimensional interaction models as special cases. A global kernel backfitting method is proposed for the estimation and inference of parameters and unknown functions in this model. Key large-sample results are established, including estimation consistency, asymptotic normality and the generalized likelihood ratio test for parameters and non-parametric functions. The proposed methodology is examined by simulation studies and applied to examine the relationship between suicide news reports in the three leading newspapers and the daily number of suicides in Taiwan. The relationship between media reporting and suicide incidence is thereby established and explored. The Canadian Journal of Statistics 47: 487–519; 2019 © 2019 Statistical Society of Canada

19.
Much effort has been devoted to deriving Edgeworth expansions for various classes of statistics that are asymptotically normally distributed, with derivations tailored to the individual structure of each class. Expansions with smaller error rates are needed for more accurate statistical inference. Two such Edgeworth expansions are derived analytically in this paper. One is a two-term expansion for the standardized U-statistic of order m, m ≥ 3, with an error rate o(n⁻¹). The other is an expansion with the same error rate for the distribution of the standardized V-statistic of the same order. In deriving the Edgeworth expansion, we make use of the close connection between the V- and U-statistics, which permits us to first derive the needed expansion for the related U-statistic and then extend it to the V-statistic, taking into account the estimation of all difference terms between the two statistics.
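For orientation, the classical one-term Edgeworth expansion for a standardized mean of i.i.d. observations with skewness γ₁ takes the form below; the paper's two-term expansions for U- and V-statistics of order m are analogous in spirit but involve cumulants of the kernel and attain the smaller error rate o(n⁻¹).

```latex
P\!\left(\frac{\sqrt{n}\,(\bar{X}_n-\mu)}{\sigma}\le x\right)
  = \Phi(x) \;-\; \phi(x)\,\frac{\gamma_1\,(x^{2}-1)}{6\sqrt{n}} \;+\; O\!\left(n^{-1}\right),
\qquad \gamma_1=\frac{\mathbb{E}(X-\mu)^{3}}{\sigma^{3}}.
```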

20.
The paper deals with the problem of determining asymptotically pointwise optimal and asymptotically optimal stopping times in Bayesian inference. Sufficient conditions are given for a family of stopping times to be asymptotically pointwise optimal and asymptotically optimal with respect to a continuous-time process. As an example, sequential estimation of the intensity of a Poisson process is considered. Under a gamma prior distribution, an asymptotically pointwise optimal and asymptotically optimal rule is given using a LINEX loss function and a cost of c per unit of observation time.
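To make the loss concrete: the LINEX loss and its Bayes estimator are recalled below, specialized to a Gamma(α₀, β₀) prior (rate parameterization) for the Poisson intensity after observing N_t events up to time t. This is standard material assuming a > −(β₀ + t); the paper's stopping-time analysis with the sampling cost c is not reproduced.

```latex
L(\delta,\theta)=b\left[e^{a(\delta-\theta)}-a(\delta-\theta)-1\right],\qquad
\delta^{*}=-\frac{1}{a}\,\log \mathbb{E}\!\left[e^{-a\theta}\mid \text{data}\right]
          =\frac{\alpha_0+N_t}{a}\,\log\!\left(1+\frac{a}{\beta_0+t}\right).
```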
