Similar Documents
20 similar documents found (search time: 62 ms)
1.
Recently, Zhang [Simultaneous confidence intervals for several inverse Gaussian populations. Stat Probab Lett. 2014;92:125–131] proposed simultaneous pairwise confidence intervals (SPCIs) based on the fiducial generalized pivotal quantity concept to make inferences about inverse Gaussian means under heteroscedasticity. In this paper, we propose three new methods for constructing SPCIs to make inferences on the means of several inverse Gaussian distributions when scale parameters and sample sizes are unequal. One of the methods yields a set of classic SPCIs (in the sense that the inference is not simulation-based) and the other two are based on a parametric bootstrap approach. The advantages of our proposed methods over Zhang's (2014) method are: (i) simulation results show that the coverage probability of the proposed parametric bootstrap approaches stays close to the nominal confidence coefficient, whereas the coverage probability of Zhang's method falls below the nominal level when the number of groups and the group variances are large; and (ii) the proposed set of classic SPCIs is conservative, in contrast to Zhang's method.
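As an illustration of the parametric bootstrap idea, here is a minimal sketch of a MaxT-type construction of SPCIs for inverse Gaussian means; it is a generic sketch, not the authors' exact procedure, and the data, number of replicates, and group settings are hypothetical.

```python
# Minimal sketch: MaxT-type parametric-bootstrap SPCIs for inverse Gaussian
# means; generic illustration only, not the paper's exact construction.
import itertools
import numpy as np

rng = np.random.default_rng(0)

def ig_mle(x):
    """MLEs of an inverse Gaussian: mean mu and shape lam."""
    mu = x.mean()
    lam = 1.0 / np.mean(1.0 / x - 1.0 / mu)
    return mu, lam

def spci_parametric_bootstrap(samples, alpha=0.05, B=2000):
    k = len(samples)
    n = np.array([len(x) for x in samples])
    mu, lam = zip(*(ig_mle(x) for x in samples))
    mu, lam = np.array(mu), np.array(lam)
    se = np.sqrt(mu**3 / (lam * n))          # Var(IG) = mu^3 / lam
    pairs = list(itertools.combinations(range(k), 2))
    t_max = np.empty(B)
    for b in range(B):
        mu_b, se_b = np.empty(k), np.empty(k)
        for g in range(k):
            xb = rng.wald(mu[g], lam[g], size=n[g])   # parametric resample
            m, l = ig_mle(xb)
            mu_b[g], se_b[g] = m, np.sqrt(m**3 / (l * n[g]))
        t_max[b] = max(abs((mu_b[g] - mu_b[h]) - (mu[g] - mu[h]))
                       / np.hypot(se_b[g], se_b[h]) for g, h in pairs)
    q = np.quantile(t_max, 1 - alpha)        # simultaneous critical value
    return {(g, h): ((mu[g] - mu[h]) - q * np.hypot(se[g], se[h]),
                     (mu[g] - mu[h]) + q * np.hypot(se[g], se[h]))
            for g, h in pairs}

# hypothetical heteroscedastic groups with unequal sample sizes
samples = [rng.wald(2.0, 4.0, 40), rng.wald(2.5, 2.0, 60), rng.wald(3.0, 8.0, 50)]
print(spci_parametric_bootstrap(samples))
```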

2.
A bootstrap-based method to construct 1−α simultaneous confidence intervals for relative effects in the one-way layout is presented. This procedure takes the stochastic correlation between the test statistics into account and results in narrower simultaneous confidence intervals than those obtained with the Bonferroni correction. Instead of using the bootstrap distribution of a maximum statistic, the coverage of the confidence intervals for the individual comparisons is adjusted iteratively until the overall confidence level is reached (see the sketch below). Empirical coverage and power estimates of the introduced procedure for many-to-one comparisons are presented and compared with asymptotic procedures based on the multivariate normal distribution.
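The iterative adjustment can be sketched as a bisection on the per-comparison level: narrow or widen the individual bootstrap intervals until their estimated joint coverage reaches 1−α. The sketch below uses plain mean differences for many-to-one comparisons rather than the paper's relative effects; data and settings are hypothetical.

```python
# Minimal sketch: iteratively adjust the per-comparison level of bootstrap
# intervals until the estimated simultaneous coverage hits 1 - alpha.
import numpy as np

rng = np.random.default_rng(1)

def many_to_one_boot(groups, B=4000):
    """Bootstrap replicates of (mean_i - mean_0), i = 1..k-1."""
    reps = np.empty((B, len(groups) - 1))
    for b in range(B):
        means = [rng.choice(g, size=len(g), replace=True).mean() for g in groups]
        reps[b] = np.array(means[1:]) - means[0]
    return reps

def adjusted_sci(groups, alpha=0.05, tol=1e-4):
    reps = many_to_one_boot(groups)
    lo_a, hi_a = alpha / reps.shape[1], alpha   # between Bonferroni and unadjusted
    while hi_a - lo_a > tol:                    # bisect on the local level
        a = (lo_a + hi_a) / 2
        lower = np.quantile(reps, a / 2, axis=0)
        upper = np.quantile(reps, 1 - a / 2, axis=0)
        joint = np.mean(np.all((reps >= lower) & (reps <= upper), axis=1))
        if joint >= 1 - alpha:
            lo_a = a                            # coverage high enough: narrow
        else:
            hi_a = a                            # coverage too low: widen
    return (np.quantile(reps, lo_a / 2, axis=0),
            np.quantile(reps, 1 - lo_a / 2, axis=0))

groups = [rng.normal(0, 1, 30), rng.normal(0.5, 1, 30), rng.normal(1.0, 2, 30)]
print(adjusted_sci(groups))
```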

3.
The lognormal distribution is quite commonly used as a lifetime distribution. Data arising from life-testing and reliability studies are often left truncated and right censored. Here, the EM algorithm is used to estimate the parameters of the lognormal model based on left-truncated and right-censored data. The maximization step of the algorithm is carried out by two alternative methods: one involves approximation by Taylor series expansion (leading to approximate maximum likelihood estimates), and the other is based on the EM gradient algorithm (Lange, 1995). These two methods are compared through Monte Carlo simulations. The Fisher scoring method for obtaining the maximum likelihood estimates exhibits convergence problems in this setup, except when the truncation percentage is small. The asymptotic variance-covariance matrix of the MLEs is derived using the missing information principle (Louis, 1982), and the resulting asymptotic confidence intervals for the scale and shape parameters are compared with the corresponding bootstrap confidence intervals. Finally, some numerical examples are given to illustrate all the methods of inference developed here.

4.
Likelihood-ratio tests (LRTs) are often used for inferences on one or more logistic regression coefficients. Conventionally, for given parameters of interest, the nuisance parameters of the likelihood function are replaced by their maximum likelihood estimates. The resulting function, called the profile likelihood, is used for inference from the LRT. In small samples, the LRT based on the profile likelihood does not follow the χ2 distribution. Several corrections have been proposed to improve the LRT with small-sample data. Additionally, complete or quasi-complete separation is a common geometric feature of small-sample binary data. In this article, for small-sample binary data, we explicitly derive the correction factors of the LRT for models with and without separation, and propose an algorithm to construct confidence intervals. We investigate the performance of the different LRT corrections, and of the corresponding confidence intervals, through simulations. Based on the simulation results, we propose an empirical rule of thumb for the use of these methods. Our simulation findings are also supported by real-world data.

5.
In this article, the generalized linear model for longitudinal data is studied. A generalized empirical likelihood method is proposed by combining generalized estimating equations and quadratic inference functions based on the working correlation matrix. It is proved that the proposed generalized empirical likelihood ratios are asymptotically chi-squared under suitable conditions, and hence they can be used to construct confidence regions for the parameters. In addition, the maximum empirical likelihood estimates of the parameters are obtained, and their asymptotic normality is proved. Simulations are undertaken to compare the generalized empirical likelihood and the normal approximation-based method in terms of coverage accuracy and the average areas/lengths of the confidence regions/intervals. A real-data example is used to illustrate our methods.

6.
Asymptotic inference results for the coefficients of variation of normal populations are presented in this article. This includes formulas for test statistics, power, confidence intervals, and simultaneous inference. The results are based on the asymptotic normality of the sample coefficient of variation as derived by Miller (1991). An example comparing the homogeneity of bone test samples produced by two different methods is presented.
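For concreteness, a minimal Wald-type sketch based on the asymptotic normality of the sample CV; the variance form cv²(0.5 + cv²)/n is the one commonly attributed to Miller (1991), but treat its exact form here as an assumption to be checked against the source.

```python
# Minimal sketch: Wald interval for a normal coefficient of variation using
# an asymptotic variance of cv^2 * (0.5 + cv^2) / n (assumed form).
import numpy as np
from scipy.stats import norm

def cv_ci(x, alpha=0.05):
    n = len(x)
    cv = x.std(ddof=1) / x.mean()
    se = np.sqrt(cv**2 * (0.5 + cv**2) / n)   # assumed Miller-type variance
    z = norm.ppf(1 - alpha / 2)
    return cv - z * se, cv + z * se

rng = np.random.default_rng(2)
x = rng.normal(10.0, 2.0, size=80)            # true cv = 0.2
print(cv_ci(x))
```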

7.
Analysis of high-dimensional data often seeks to identify a subset of important features and assess their effects on the outcome. Traditional statistical inference procedures based on standard regression methods often fail in the presence of high-dimensional features. In recent years, regularization methods have emerged as promising tools for analyzing high-dimensional data. These methods simultaneously select important features and provide stable estimation of their effects. The adaptive LASSO and SCAD, for instance, give consistent and asymptotically normal estimates with oracle properties. In finite samples, however, it remains difficult to obtain interval estimators for the regression parameters. In this paper, we propose perturbation-resampling-based procedures to approximate the distribution of a general class of penalized parameter estimates. Our proposal, justified by asymptotic theory, provides a simple way to estimate the covariance matrix and confidence regions. Through finite-sample simulations, we verify the ability of this method to give accurate inference and compare it to other widely used standard deviation and confidence interval estimates. We also illustrate our proposals with a data set used to study the association between HIV drug resistance and a large number of genetic mutations.
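A minimal sketch of the perturbation-resampling idea for a lasso-type estimator: each replicate reweights the observations with i.i.d. exponential weights (absorbed here by row-rescaling) and refits, and the spread of the perturbed estimates approximates the sampling distribution. This is an illustration, not the authors' exact procedure; the penalty level is held fixed for simplicity.

```python
# Minimal sketch: perturbation resampling around a lasso fit; exponential
# observation weights are absorbed by scaling rows by sqrt(w).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(7)

def perturbed_lasso(X, y, alpha=0.05, B=500):
    n = X.shape[0]
    betas = np.empty((B, X.shape[1]))
    for b in range(B):
        w = np.sqrt(rng.exponential(1.0, size=n))   # sqrt of Exp(1) weights
        betas[b] = Lasso(alpha=alpha).fit(w[:, None] * X, w * y).coef_
    return betas

n, p = 200, 10
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:3] = [2.0, -1.5, 1.0]     # sparse truth
y = X @ beta + rng.normal(size=n)
betas = perturbed_lasso(X, y)
lo, hi = np.quantile(betas, [0.025, 0.975], axis=0)
print(np.vstack([lo, hi]).round(2))
```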

8.
In this article, a naive empirical likelihood ratio is constructed for a non-parametric regression model with clustered data, by combining the empirical likelihood method and local polynomial fitting. The maximum empirical likelihood estimates for the regression functions and their derivatives are obtained. The asymptotic distributions of the proposed ratio and estimators are established. A bias-corrected empirical likelihood approach to inference for the parameters of interest is developed, and the residual-adjusted empirical log-likelihood ratio is shown to be asymptotically chi-squared. These results can be used to construct a class of approximate pointwise confidence intervals and simultaneous bands for the regression functions and their derivatives. Owing to the bias correction of the empirical likelihood ratio, not only is the accuracy of the resulting confidence region improved, but a data-driven algorithm can also be used to select an optimal bandwidth for estimating the regression functions and their derivatives. A simulation study compares the empirical likelihood method with the normal approximation-based method in terms of coverage accuracy and average widths of the confidence intervals/bands. An application of the method is illustrated with a real data set.

9.
The authors explore likelihood-based methods for making inferences about the components of variance in a general normal mixed linear model. In particular, they use local asymptotic approximations to construct confidence intervals for the components of variance when the components are close to the boundary of the parameter space. In the process, they explore the question of how to profile the restricted likelihood (REML). Also, they show that general REML estimates are less likely to fall on the boundary of the parameter space than maximum-likelihood estimates and that the likelihood-ratio test based on the local asymptotic approximation has higher power than the likelihood-ratio test based on the usual chi-squared approximation. They examine the finite-sample properties of the proposed intervals by means of a simulation study.

10.
Importance resampling is an approach that uses exponential tilting to reduce the resampling necessary for the construction of nonparametric bootstrap confidence intervals. The properties of bootstrap importance confidence intervals are well established when the data are a smooth function of means and there is no censoring. However, in the framework of survival or time-to-event data, the asymptotic properties of importance resampling have not been rigorously studied, mainly because of the unduly complicated theory incurred when data are censored. This paper uses extensive simulation to show that, for parameter estimates arising from fitting Cox proportional hazards models, importance bootstrap confidence intervals can be constructed if the importance resampling probabilities of the records for the n individuals in the study are determined by the empirical influence function for the parameter of interest (the sketch below illustrates the tilting idea). Our results show that, compared to uniform resampling, importance resampling improves the relative mean-squared-error (MSE) efficiency by a factor of nine (for n = 200). The efficiency increases significantly with sample size, is mildly associated with the amount of censoring, but decreases slightly as the number of bootstrap resamples increases. The extra CPU time required to calculate importance resamples is negligible compared to the large improvement in MSE efficiency. The method is illustrated through an application to data on chronic lymphocytic leukemia, which highlights that the bootstrap confidence interval is the preferred alternative to large-sample inference when the distribution of a specific covariate deviates from normality. Our results imply that, because of its computational efficiency, importance resampling is recommended whenever bootstrap methodology is implemented in a survival framework. Its use is particularly important when complex covariates are involved or when the survival problem to be solved is part of a larger problem; for instance, when determining confidence bounds for models linking survival time with clusters identified in gene expression microarray data.
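A minimal sketch of exponential tilting with influence-based resampling probabilities, illustrated on the sample mean rather than the Cox model of the paper; the tilting constant lam is an illustrative assumption. Each resample is drawn with tilted probabilities and reweighted back to the uniform bootstrap.

```python
# Minimal sketch: importance resampling for a bootstrap tail quantile, with
# resampling probabilities tilted by empirical influence values l_i.
import numpy as np

rng = np.random.default_rng(6)

def importance_bootstrap_quantile(x, prob, lam=0.5, B=2000):
    n = len(x)
    l = x - x.mean()                      # empirical influence of the mean
    p = np.exp(lam * l / l.std())
    p /= p.sum()                          # tilted resampling probabilities
    stats, weights = np.empty(B), np.empty(B)
    for b in range(B):
        idx = rng.choice(n, size=n, replace=True, p=p)
        stats[b] = x[idx].mean()
        # weight back to uniform resampling: prod over draws of 1/(n p_i)
        weights[b] = np.exp(-np.log(n * p[idx]).sum())
    order = np.argsort(stats)
    cdf = np.cumsum(weights[order]) / B   # weighted bootstrap CDF
    i = min(np.searchsorted(cdf, prob), B - 1)
    return stats[order][i]

x = rng.exponential(1.0, size=200)
print(importance_bootstrap_quantile(x, 0.975))   # upper bootstrap quantile
```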

11.
In many engineering problems it is necessary to draw statistical inferences on the mean of a lognormal distribution based on a complete sample of observations. Statistical demonstration of mean time to repair (MTTR) is one example. Although optimum confidence intervals and hypothesis tests for the lognormal mean have been developed, they are difficult to use, requiring extensive tables and/or a computer. In this paper, simplified conservative methods for calculating confidence intervals or hypothesis tests for the lognormal mean are presented. Here, "conservative" refers to confidence intervals (hypothesis tests) whose infimum coverage probability (supremum probability of rejecting the null hypothesis, taken over parameter values under the null hypothesis) equals the nominal level. The term "conservative" has obvious implications for confidence intervals (they are "wider" in some sense than their optimum or exact counterparts). Applying the term "conservative" to hypothesis tests should not be confusing if it is remembered that their equivalent confidence intervals are conservative. No implication of optimality is intended for these conservative procedures. It is emphasized that these are direct statistical inference methods for the lognormal mean, as opposed to the already well-known methods for the parameters of the underlying normal distribution. The method currently employed in MIL-STD-471A for statistical demonstration of MTTR is analyzed and compared with the new method in terms of asymptotic relative efficiency. The new methods are also compared with the optimum methods derived by Land (1971, 1973).
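As a point of reference, a minimal sketch of the standard Cox-type large-sample interval for the lognormal mean exp(μ + σ²/2); this is a common baseline, not the conservative procedure derived in the paper.

```python
# Minimal sketch: Cox-type large-sample interval for a lognormal mean,
# built on the log scale and exponentiated.
import numpy as np
from scipy.stats import norm

def lognormal_mean_ci(x, alpha=0.05):
    y = np.log(x)
    n, m, s2 = len(y), y.mean(), y.var(ddof=1)
    point = m + s2 / 2                           # log of the lognormal mean
    se = np.sqrt(s2 / n + s2**2 / (2 * (n - 1)))
    z = norm.ppf(1 - alpha / 2)
    return np.exp(point - z * se), np.exp(point + z * se)

rng = np.random.default_rng(3)
x = rng.lognormal(mean=1.0, sigma=0.5, size=50)  # true mean = exp(1.125)
print(lognormal_mean_ci(x))
```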

12.
Stochastic gradient descent (SGD) provides a scalable way to compute parameter estimates in applications involving large-scale or streaming data. An alternative version, averaged implicit SGD (AI-SGD), has been shown to be more stable and more efficient. Although the asymptotic properties of AI-SGD have been well established, statistical inference based on it, such as interval estimation, remains unexplored. The bootstrap method is computationally infeasible because it requires repeatedly resampling the entire data set. In addition, the plug-in method is not applicable when there is no explicit covariance matrix formula. In this paper, we propose a scalable statistical inference procedure for conducting inference based on the AI-SGD estimator. The proposed procedure updates the AI-SGD estimate as well as many randomly perturbed AI-SGD estimates upon the arrival of each observation. We derive large-sample theoretical properties of the proposed procedure and examine its performance via simulation studies.
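A minimal sketch of the perturbation idea for AI-SGD on least squares: alongside the main path, K randomly perturbed implicit-SGD paths are updated at each observation, and the spread of their final averages yields interval estimates. The learning-rate schedule and Exp(1) perturbation weights are illustrative assumptions, not the paper's exact choices.

```python
# Minimal sketch: averaged implicit SGD with K perturbed companion paths.
# For squared loss, the implicit update has a closed form.
import numpy as np

rng = np.random.default_rng(4)

def ai_sgd_perturbed(X, y, K=100, gamma0=1.0):
    n, p = X.shape
    theta = np.zeros(p)                 # main implicit-SGD iterate
    thetas = np.zeros((K, p))           # K perturbed iterates
    avg, avgs = np.zeros(p), np.zeros((K, p))
    for t in range(n):
        x, yt = X[t], y[t]
        g = gamma0 / (1 + t)            # decaying learning rate (assumed)
        # implicit update: theta = theta + g*(yt - x.theta_new)*x, solved in closed form
        theta += g / (1 + g * (x @ x)) * (yt - x @ theta) * x
        w = rng.exponential(1.0, size=K)        # random perturbation weights
        step = (w * g) / (1 + w * g * (x @ x))
        thetas += step[:, None] * (yt - thetas @ x)[:, None] * x
        avg += (theta - avg) / (t + 1)          # running averages of iterates
        avgs += (thetas - avgs) / (t + 1)
    return avg, avgs

n, p = 5000, 3
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=n)
est, perturbed = ai_sgd_perturbed(X, y)
lo, hi = np.quantile(perturbed, [0.025, 0.975], axis=0)
print(est, lo, hi)
```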

13.
Ruiqin Tian, Statistics, 2017, 51(5): 988–1005
In this paper, empirical likelihood inference for longitudinal data within the framework of partial linear regression models is investigated. The proposed procedures take the correlation within groups into account without involving direct estimation of the nuisance parameters in the correlation matrix. The empirical likelihood method is used to estimate the regression coefficients and the baseline function, and to construct confidence intervals. A nonparametric version of Wilks's theorem for the limiting distribution of the empirical likelihood ratio is derived. Compared with methods based on normal approximations, the empirical likelihood does not require consistent estimators of the asymptotic variance and bias. The finite-sample behaviour of the proposed method is evaluated by simulation and illustrated with an AIDS clinical trial data set.

14.
Exact confidence intervals for variances rely on normal distribution assumptions. Alternatively, large-sample confidence intervals for the variance can be attained if one estimates the kurtosis of the underlying distribution. The method used to estimate the kurtosis has a direct impact on the performance of the interval and thus on the quality of statistical inferences. In this paper, the author considers a number of kurtosis estimators combined with large-sample theory to construct approximate confidence intervals for the variance (the sketch below shows the basic plug-in construction). In addition, a nonparametric bootstrap resampling procedure is used to build bootstrap confidence intervals for the variance. Simulated coverage probabilities using the different confidence interval methods are computed for a variety of sample sizes and distributions. A modification to a conventional estimator of the kurtosis, in conjunction with adjustments to the mean and variance of the asymptotic distribution of a function of the sample variance, improves the resulting coverage values for leptokurtically distributed populations.
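A minimal sketch of the basic plug-in interval, using the standard asymptotic variance (μ₄ − σ⁴)/n for s² with a naive fourth-moment estimate; the paper's modified kurtosis estimator and mean/variance adjustments are not reproduced here.

```python
# Minimal sketch: large-sample variance interval with a plugged-in fourth
# central moment, valid without normality.
import numpy as np
from scipy.stats import norm

def variance_ci_kurtosis(x, alpha=0.05):
    n = len(x)
    s2 = x.var(ddof=1)
    m4 = np.mean((x - x.mean()) ** 4)     # naive fourth-moment estimate
    se = np.sqrt(max(m4 - s2**2, 0.0) / n)
    z = norm.ppf(1 - alpha / 2)
    return s2 - z * se, s2 + z * se

rng = np.random.default_rng(5)
x = rng.standard_t(df=6, size=200)        # leptokurtic example
print(variance_ci_kurtosis(x))
```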

15.
Empirical Likelihood-based Inference in Linear Models with Missing Data
The missing response problem in linear regression is studied. An adjusted empirical likelihood approach to inference on the mean of the response variable is developed. A non-parametric version of Wilks's theorem for the adjusted empirical likelihood is proved, and the corresponding empirical likelihood confidence interval for the mean is constructed. With auxiliary information, an empirical likelihood-based estimator with asymptotic normality is defined and an adjusted empirical log-likelihood function with an asymptotic χ2 distribution is derived. A simulation study is conducted to compare the adjusted empirical likelihood methods and the normal approximation methods in terms of coverage accuracy and average lengths of the confidence intervals. Based on biases and standard errors, a comparison is also made by simulation between the empirical likelihood-based estimator and related estimators. Our simulations indicate that the adjusted empirical likelihood methods perform competitively and that the use of auxiliary information provides improved inferences.

16.
We consider large-sample inference in a semiparametric logistic/proportional-hazards mixture model. This model has been proposed for survival data in which a positive proportion of subjects in the population is not susceptible to the event under consideration. Previous studies of the logistic/proportional-hazards mixture model have focused on developing point estimation procedures for the unknown parameters. This paper studies large-sample inference based on the semiparametric maximum likelihood estimator. Specifically, we establish existence, consistency, and asymptotic normality results for the semiparametric maximum likelihood estimator. We also derive consistent variance estimates for both the parametric and non-parametric components. The results provide a theoretical foundation for large-sample inference under the logistic/proportional-hazards mixture model.

17.
Comparative lifetime experiments are of great importance when the interest is in ascertaining the relative merits of two competing products with regard to their reliability. In this article, we consider two exponential populations when joint progressive Type-II censoring is implemented on the two samples. We derive the moment generating functions and the exact distributions of the maximum likelihood estimators (MLEs) of the mean lifetimes of the two exponential populations under such joint progressive Type-II censoring. We then discuss exact lower confidence bounds, exact confidence intervals, and simultaneous confidence regions. Next, we discuss the corresponding approximate results based on the asymptotic normality of the MLEs as well as those based on the Bayesian method. All these confidence intervals and regions are then compared, by means of Monte Carlo simulations, with those obtained from bootstrap methods. Finally, an illustrative example is presented to demonstrate all the methods of inference discussed here.

18.
In applications using a linear regression model with a balanced two-fold nested error structure, interest focuses on inferences concerning the variability of the effects associated with the levels of nesting. This article proposes confidence intervals on the variance components associated with the primary and secondary levels of the model. To construct the confidence intervals we use a modified large-sample method, a generalized inference method, and a Satterthwaite approximation (the sketch below illustrates the Satterthwaite construction). Computer simulation is performed to compare the proposed confidence intervals. A numerical example is provided to demonstrate the intervals.
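A minimal sketch of the Satterthwaite construction for a variance component written as a linear combination of independent mean squares; the paper's modified large-sample and generalized-inference intervals are not reproduced, and the ANOVA numbers below are hypothetical.

```python
# Minimal sketch: Satterthwaite-type interval for sigma^2 = sum(c_i * MS_i),
# with approximate degrees of freedom from the usual moment matching.
import numpy as np
from scipy.stats import chi2

def satterthwaite_ci(ms, df, c, alpha=0.05):
    ms, df, c = map(np.asarray, (ms, df, c))
    est = np.sum(c * ms)                        # point estimate of the component
    nu = est**2 / np.sum((c * ms) ** 2 / df)    # approximate degrees of freedom
    return (nu * est / chi2.ppf(1 - alpha / 2, nu),
            nu * est / chi2.ppf(alpha / 2, nu))

# hypothetical balanced two-fold nested ANOVA: sigma_A^2 = (MSA - MSB) / (J*K)
J, K = 4, 5                                     # secondary levels, replicates
msa, msb, dfa, dfb = 12.0, 3.0, 9, 30
print(satterthwaite_ci([msa, msb], [dfa, dfb], [1 / (J * K), -1 / (J * K)]))
```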

19.
Valid simultaneous confidence intervals based on rerandomization are provided for the first time. They are derived from joint confidence regions which are constructed by testing all possible parametric values. A simple example illustrates these confidence intervals and compares inferences from them with those from other methods.

20.
Recently, exact inference under hybrid censoring schemes has attracted extensive attention in the field of reliability analysis. However, most authors neglect the possibility of a competing risks model. This paper discusses exact likelihood inference for generalized Type-I hybrid censored data under an exponential competing failure model. Based on the maximum likelihood estimates of the unknown parameters, we establish the exact conditional distributions of the parameters via the conditional moment generating function, and then obtain moment properties as well as exact confidence intervals (CIs) for the parameters. Furthermore, approximate CIs are constructed using the asymptotic distribution and the bootstrap method. We compare their performance with the exact method through Monte Carlo simulations. Finally, a real data set is analysed to illustrate the validity of all the methods developed here.
