Similar Documents
 20 similar documents found (search time: 31 ms)
1.
ABSTRACT

In this article, we consider a simple step-stress life test in the presence of exponentially distributed competing risks. It is assumed that the stress is changed when a pre-specified number of failures takes place. The data are assumed to be Type-II censored. We obtain the maximum likelihood estimators of the model parameters and derive their exact conditional distributions. Based on these conditional distributions, approximate confidence intervals (CIs) for the unknown parameters are constructed. Percentile bootstrap CIs for the model parameters are also provided. The optimal test plan is addressed as well. We perform an extensive simulation study to observe the behaviour of the proposed method; the performance is quite satisfactory. Finally, we analyse two data sets for illustrative purposes.
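As a concrete illustration of the percentile bootstrap CIs mentioned above, the sketch below applies the generic recipe to the MLE of an exponential rate. It is a minimal stand-in, not the authors' step-stress/competing-risks procedure; the data and function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def percentile_bootstrap_ci(data, estimator, level=0.95, n_boot=2000, rng=rng):
    """Generic percentile bootstrap CI: resample with replacement,
    re-estimate, and take empirical quantiles of the bootstrap estimates."""
    data = np.asarray(data)
    boot = np.array([estimator(rng.choice(data, size=data.size, replace=True))
                     for _ in range(n_boot)])
    alpha = 1.0 - level
    return np.quantile(boot, [alpha / 2, 1 - alpha / 2])

# Illustration with a hypothetical exponential sample: the MLE of the rate
# is 1 / sample mean.
sample = rng.exponential(scale=2.0, size=50)          # true rate = 0.5
rate_mle = 1.0 / sample.mean()
lo, hi = percentile_bootstrap_ci(sample, lambda x: 1.0 / x.mean())
print(f"MLE = {rate_mle:.3f}, 95% percentile bootstrap CI = ({lo:.3f}, {hi:.3f})")
```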

2.
Inference based on the Central Limit Theorem has only first-order accuracy. We give tests and confidence intervals (CIs) of second-order accuracy for the shape parameter ρ of a gamma distribution for both the unscaled and scaled cases.

Tests and CIs based on moment and cumulant estimates are considered as well as those based on the maximum likelihood estimate (MLE).

For the unscaled case the MLE is the moment estimate of order zero; the most efficient moment estimate of integral order is the sample mean, which has asymptotic relative efficiency (ARE) 0.61 when ρ = 1.

For the scaled case the most efficient moment estimate is a function of the mean and variance. Its ARE is 0.39 when ρ = 1.

Our motivation for constructing these tests of ρ = 1 and CIs for ρ is to provide a simple and convenient method for testing whether a distribution is exponential in situations, such as rainfall models, where such an assumption is commonly made.
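For reference, a minimal sketch of two baseline estimators of the shape ρ discussed above: the scaled-case moment estimate x̄²/s² and the MLE. The second-order-accurate tests and CIs of the paper are not reproduced, and the simulated data are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.gamma(shape=1.0, scale=3.0, size=200)   # hypothetical data, true rho = 1

# Moment estimate for the scaled case: a function of the mean and variance.
rho_moment = x.mean() ** 2 / x.var(ddof=1)

# MLE with the scale left free (location fixed at 0 for a two-parameter gamma).
rho_mle, _, scale_mle = stats.gamma.fit(x, floc=0)

print(f"moment estimate of rho: {rho_moment:.3f}")
print(f"MLE of rho:             {rho_mle:.3f}")
```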

3.
ABSTRACT

In a regression model with a random individual effect and a random time effect, explicit representations of the nonnegative quadratic minimum biased estimators of the corresponding variances are deduced. These estimators always exist and are unique. Moreover, under the normality assumption for the dependent variable, unbiased estimators of the mean squared errors of the variance estimates are derived. Finally, confidence intervals for the variance components are considered.

4.
In many clinical trials and epidemiological studies, comparing the mean count response of an exposed group to that of a control group is often of interest. Such data are often over-dispersed relative to Poisson variation, and previous studies usually compared groups using confidence intervals (CIs) for the difference between the two means. However, in some situations, especially when the means are small, interval estimation of the mean ratio (MR) is preferable. Moreover, Cox and Lewis (The Statistical Analysis of Series of Events, Methuen, London, 1966) pointed out many other situations where the MR is more relevant than the difference of means. In this paper, we consider CI construction for the ratio of means between two treatments for over-dispersed Poisson data. We develop several CIs for this situation by hybridizing two separate CIs for the two individual means. Extensive simulations show that all hybrid-based CIs perform reasonably well in terms of coverage. However, the CIs based on the delta method using the logarithmic transformation perform better than the other intervals in the sense that they have slightly shorter interval lengths and show better balance of tail errors. The proposed CIs are illustrated with three real data examples.
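The delta-method interval on the log scale follows a standard construction; below is a minimal sketch under the approximation Var(log x̄) ≈ s²/(n x̄²), which uses the sample variance and therefore tolerates over-dispersion. The data and function names are hypothetical, not the authors' implementation.

```python
import numpy as np
from scipy import stats

def mean_ratio_ci(x_trt, x_ctl, level=0.95):
    """Delta-method CI for the ratio of means on the log scale:
    Var(log xbar) is approximated by s^2 / (n * xbar^2), which does not
    assume Poisson (unit) variance and so tolerates over-dispersion."""
    x_trt, x_ctl = np.asarray(x_trt, float), np.asarray(x_ctl, float)
    m1, m0 = x_trt.mean(), x_ctl.mean()
    v1 = x_trt.var(ddof=1) / (x_trt.size * m1 ** 2)
    v0 = x_ctl.var(ddof=1) / (x_ctl.size * m0 ** 2)
    z = stats.norm.ppf(0.5 + level / 2)
    log_mr = np.log(m1 / m0)
    half = z * np.sqrt(v1 + v0)
    return m1 / m0, np.exp(log_mr - half), np.exp(log_mr + half)

# Hypothetical over-dispersed counts (negative binomial data).
rng = np.random.default_rng(2)
trt = rng.negative_binomial(n=2, p=2 / 7, size=60)   # mean 5
ctl = rng.negative_binomial(n=2, p=2 / 5, size=60)   # mean 3
print(mean_ratio_ci(trt, ctl))
```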

5.

This article presents methods for constructing confidence intervals for the median of a finite population under simple random sampling without replacement, stratified random sampling, and cluster sampling. The confidence intervals, as well as point estimates and test statistics, are derived from sign estimating functions, which are based on the well-known sign test. Therefore, a unified approach to inference about the median of a finite population is given.
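A minimal sketch of the sign-test-inverted CI for a median under simple random sampling, in its usual order-statistic form. The finite-population correction and the stratified and cluster refinements of the article are not reproduced; the data are hypothetical.

```python
import numpy as np
from scipy import stats

def sign_ci_median(sample, level=0.95):
    """Nonparametric CI for the median obtained by inverting the sign test:
    the interval (X_(j+1), X_(n-j)) in 1-based order-statistic notation,
    where j is the largest integer with P(Bin(n, 1/2) <= j) <= alpha/2."""
    x = np.sort(np.asarray(sample, float))
    n = x.size
    alpha = 1.0 - level
    j = int(stats.binom.ppf(alpha / 2, n, 0.5))
    if stats.binom.cdf(j, n, 0.5) > alpha / 2:   # ppf can overshoot by one
        j -= 1
    j = max(j, 0)
    return x[j], x[n - j - 1]

rng = np.random.default_rng(3)
srs = rng.lognormal(mean=1.0, sigma=0.8, size=80)   # hypothetical SRS of a skewed variable
print(sign_ci_median(srs))
```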

6.
ABSTRACT

This paper presents methods for constructing prediction limits for a step-stress model in accelerated life testing. An exponential life distribution whose mean is a log-linear function of stress, together with a cumulative exposure model, is assumed. Two prediction problems are discussed: one concerns the prediction of the life at a design stress, and the other concerns the prediction of a future life during the step-stress testing. Both predictions require knowledge of some model parameters. When estimates of the model parameters are available, a calibration method based on simulations is proposed for correcting the prediction intervals (regions) obtained by treating the parameter estimates as the true parameter values. Finally, a numerical example is given to illustrate the prediction procedure.

7.
The importance of the dispersion parameter for counts occurring in toxicology, biology, clinical medicine, epidemiology, and other similar studies is well known. Several procedures for constructing confidence intervals (CIs) for the dispersion parameter have been investigated, but little attention has been paid to their accuracy. In this paper, we introduce the profile likelihood (PL) approach and the hybrid profile variance (HPV) approach for constructing CIs for the dispersion parameter of counts based on the negative binomial model. The non-parametric bootstrap (NPB) approach based on the maximum likelihood (ML) estimates of the dispersion parameter is also considered. We then compare our proposed approaches with an asymptotic approach based on the ML and restricted ML (REML) estimates of the dispersion parameter, as well as with the parametric bootstrap (PB) approach based on the ML estimates. As assessed by Monte Carlo simulations, the PL approach has the best small-sample performance, followed by the REML, HPV, NPB, and PB approaches. Three applications to biological count data are presented.
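A minimal sketch of a profile likelihood interval for the negative binomial dispersion, assuming the NB2 parameterization Var = μ + αμ² and i.i.d. counts (for which the profile MLE of μ is the sample mean). Function names, search bounds, and data are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy import stats, optimize

def nb_loglik(alpha, y, mu):
    """NB2 log-likelihood with mean mu and dispersion alpha (Var = mu + alpha*mu^2),
    via scipy's (n, p) parameterization: n = 1/alpha, p = n / (n + mu)."""
    n = 1.0 / alpha
    p = n / (n + mu)
    return stats.nbinom.logpmf(y, n, p).sum()

def profile_ci_dispersion(y, level=0.95):
    y = np.asarray(y)
    mu_hat = y.mean()                     # MLE of mu for any fixed alpha (iid case)
    neg = lambda a: -nb_loglik(a, y, mu_hat)
    alpha_hat = optimize.minimize_scalar(neg, bounds=(1e-6, 50.0), method="bounded").x
    l_max = nb_loglik(alpha_hat, y, mu_hat)
    cut = 0.5 * stats.chi2.ppf(level, df=1)
    g = lambda a: nb_loglik(a, y, mu_hat) - (l_max - cut)   # zero at CI endpoints
    lo = optimize.brentq(g, 1e-6, alpha_hat) if g(1e-6) < 0 else 1e-6
    hi = optimize.brentq(g, alpha_hat, 50.0) if g(50.0) < 0 else 50.0
    return alpha_hat, (lo, hi)

rng = np.random.default_rng(4)
y = rng.negative_binomial(n=2.0, p=2.0 / 6.0, size=100)   # alpha = 0.5, mu = 4
print(profile_ci_dispersion(y))
```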

8.
A linear errors-in-variables (EIV) model that contains measurement errors in both the input and output data is considered. Weakly dependent (α- and φ-mixing) errors, not necessarily stationary or identically distributed, are taken into account within the EIV model. Parameters of the EIV model are estimated by the total least squares approach, which yields highly nonlinear estimates. Because of this, many statistical procedures for constructing confidence intervals and testing hypotheses cannot be applied. One possible solution to this dilemma is a block bootstrap. An appropriate moving block bootstrap procedure is provided and its correctness is proved. The results are illustrated through a simulation study and applied to real data as well.

9.
ABSTRACT

We consider point and interval estimation of the unknown parameters of a generalized inverted exponential distribution in the presence of hybrid censoring. The maximum likelihood estimates are obtained using the EM algorithm. We then compute the Fisher information matrix using the missing value principle. Bayes estimates are derived under squared error and general entropy loss functions. Furthermore, approximate Bayes estimates are obtained using the Tierney and Kadane method as well as an importance sampling approach. Asymptotic and highest posterior density intervals are also constructed. The proposed estimates are compared numerically using Monte Carlo simulations, and a real data set is analyzed for illustrative purposes.

10.
Abstract

Recently, the study of the lifetime of systems in reliability and survival analysis in the presence of several causes of failure (competing risks) has attracted attention in the literature. In this paper, series and parallel systems with an exponential lifetime for each item of the system are considered. Several causes of failure independently affect the lifetime distributions, and failure times of the systems are observed under a progressive Type-II censoring scheme. For series systems, the maximum likelihood estimates of the parameters are computed, and confidence intervals for the model parameters are obtained using the Fisher information matrix. For parallel systems, a generalized EM algorithm, which embeds Newton-Raphson steps within the EM iterations, is used to compute the maximum likelihood estimates of the parameters. The standard errors of the maximum likelihood estimates are computed using the supplemented EM algorithm. A simulation study confirms the good performance of the introduced approach.

11.
Abstract

Inferential methods based on ranks provide robust and powerful alternatives for testing and estimation. This article has two objectives. First, we develop a general method of simultaneous confidence intervals based on the rank estimates of the parameters of a general linear model and derive the asymptotic distribution of the pivotal quantity. Second, we extend the method to high-dimensional data, such as gene expression data, for which the usual large-sample approximation does not apply. It is common in practice to use the asymptotic distribution to make inference for small samples; the empirical investigation in this article shows that, for methods based on rank estimates, this approach does not produce viable inference and should be avoided. A method based on the bootstrap is outlined and shown to provide a reliable and accurate way of constructing simultaneous confidence intervals based on rank estimates. In particular, it is shown that the commonly applied normal and t-approximations are not satisfactory, particularly for large-scale inference. Methods based on ranks are well suited to the analysis of microarray gene expression data, which typically involve large-scale inference based on small samples that contain many outliers and violate the normality assumption. A real microarray data set is analyzed using the rank-estimate simultaneous confidence intervals, and the viability of the proposed method is assessed through a Monte Carlo simulation study under varied assumptions.

12.
ABSTRACT

In many real-world applications, the traditional theory of analysis of covariance (ANCOVA) leads to inadequate and unreliable results because the response variable observations violate the essential Gaussian assumption, whether due to population heterogeneity, the presence of outliers, or both. In this paper, we develop a Gaussian mixture ANCOVA model for modelling heterogeneous populations with a finite number of subpopulations. We provide the maximum likelihood estimates of the model parameters via an EM algorithm. We also derive the adjusted effect estimators for treatments and covariates. The Fisher information matrix of the model and asymptotic confidence intervals for the parameters are also discussed. We performed a simulation study to assess the performance of the proposed model, and a real-world example is worked out to illustrate the methodology.

13.
ABSTRACT

This paper presents a modified skew-normal (SN) model that contains the normal model as a special case. Unlike the usual SN model, the Fisher information matrix of the proposed model is always non-singular. Despite this desirable property for regular asymptotic inference, in the considered model, as with the SN model, the maximum likelihood estimator (MLE) of the skewness parameter may diverge with positive probability in samples of moderate size. As a solution to this problem, a modified score function is used for the estimation of the skewness parameter. It is proved that the modified MLE is always finite. The quasi-likelihood approach is considered to build confidence intervals. When the model includes location and scale parameters, the proposed method is combined with the unmodified maximum likelihood estimates of these parameters.

14.
Accelerated life testing is widely used in product life testing experiments since it significantly reduces the time and cost of testing. In this paper, assuming that the lifetimes of items under the use condition follow the two-parameter Pareto distribution of the second kind, partially accelerated life tests based on progressively Type-II censored samples are considered. The likelihood equations for the model parameters and the acceleration factor are reduced to a single nonlinear equation that is solved numerically to obtain the maximum likelihood estimates (MLEs). Based on a normal approximation to the asymptotic distribution of the MLEs, approximate confidence intervals (ACIs) for the parameters are derived. Two bootstrap CIs are also proposed. The classical Bayes estimates cannot be obtained in explicit form, so we apply the Markov chain Monte Carlo method, which allows us to construct credible intervals for the involved parameters. Analysis of a simulated data set is presented for illustrative purposes. Finally, a Monte Carlo simulation study is carried out to compare the precision of the Bayes estimates with that of the MLEs and to compare the performance of the corresponding CIs.

15.
Abstract

Linear mixed effects models have been popular in small area estimation problems for modeling survey data when the sample size in one or more areas is too small for reliable inference. However, when the data are restricted to a bounded interval, the linear model may be inappropriate, particularly if the data are near the boundary. Nonlinear sampling models are becoming increasingly popular for small area estimation problems when the normal model is inadequate. This paper studies the use of a beta distribution as an alternative to the normal distribution as a sampling model for survey estimates of proportions, which take values in (0, 1). Inference for small area proportions based on the posterior distribution of a beta regression model ensures that point estimates and credible intervals take values in (0, 1). Properties of a hierarchical Bayesian small area model with a beta sampling distribution and logistic link function are presented and compared to those of the linear mixed effects model. Propriety of the posterior distribution under certain noninformative priors is shown, and the behavior of the posterior mean as a function of the sampling variance and the model variance is described. An example using 2010 Small Area Income and Poverty Estimates (SAIPE) data is given, and a numerical example studying small-sample properties of the model is presented.
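A minimal sketch of one standard way to write a beta sampling model with a logistic link, using the mean-precision parameterization; it only simulates the sampling model to show that estimates stay in (0, 1). The hierarchical Bayes fitting, priors, and SAIPE application are not reproduced, and all parameter values are hypothetical.

```python
import numpy as np
from scipy.special import expit

# Mean/precision parameterization with a logistic link:
# theta_i = expit(x_i' beta + u_i),  y_i | theta_i ~ Beta(theta_i * phi, (1 - theta_i) * phi),
# so E(y_i) = theta_i and Var(y_i) = theta_i (1 - theta_i) / (phi + 1).
rng = np.random.default_rng(5)
x = rng.normal(size=50)                   # hypothetical area-level covariate
beta0, beta1 = -0.5, 0.8                  # hypothetical regression coefficients
u = rng.normal(scale=0.3, size=50)        # area-level random effects
theta = expit(beta0 + beta1 * x + u)      # small area proportions, always in (0, 1)
phi = 40.0                                # precision, tied to the sampling variance
y = rng.beta(theta * phi, (1 - theta) * phi)
print(y.min(), y.max())                   # survey estimates stay inside (0, 1) by construction
```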

16.
Abstract.  The likelihood ratio statistic for testing pointwise hypotheses about the survival time distribution in the current status model can be inverted to yield confidence intervals (CIs). One advantage of this procedure is that CIs can be formed without estimating the unknown parameters that figure in the asymptotic distribution of the maximum likelihood estimator (MLE) of the distribution function. We discuss the likelihood ratio-based CIs for the distribution function and the quantile function and compare these intervals with several different intervals based on the MLE. The quantiles of the limiting distribution of the MLE are estimated using various methods, including parametric fitting, kernel smoothing, and subsampling techniques. Comparisons are carried out both on simulated data and on a data set involving time to immunization against rubella. The comparisons indicate that the likelihood ratio-based intervals are preferable from several perspectives.
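For context, the NPMLE of the distribution function from current status data can be computed as the isotonic regression of the censoring indicators ordered by observation time; below is a minimal sketch with simulated data. The likelihood-ratio-based intervals compared in the paper are not reproduced, and the simulation settings are hypothetical.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Current status data (T_i, delta_i) with delta_i = 1{X_i <= T_i}: maximizing the
# Bernoulli likelihood over monotone F reduces to the isotonic regression of the
# indicators on the observation times.
rng = np.random.default_rng(6)
n = 200
x = rng.weibull(1.5, size=n) * 2.0        # hypothetical (unobserved) event times
t = rng.uniform(0, 4, size=n)             # inspection times
delta = (x <= t).astype(float)            # current status indicators

order = np.argsort(t)
iso = IsotonicRegression(y_min=0.0, y_max=1.0, increasing=True)
F_hat = iso.fit_transform(t[order], delta[order])   # NPMLE of F at the sorted t's
print(np.column_stack([t[order], F_hat])[:5])
```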

17.
This paper concerns maximum likelihood estimation for the semiparametric shared gamma frailty model, that is, the Cox proportional hazards model with the hazard function multiplied by a gamma random variable with mean 1 and variance θ. A hybrid ML-EM algorithm is applied to 26 400 simulated samples of 400 to 8000 observations with Weibull hazards. The hybrid algorithm is much faster than the standard EM algorithm, faster than standard direct maximum likelihood (ML, Newton-Raphson) for large samples, and gives almost identical results to the penalised likelihood method in S-PLUS 2000. When the true value θ0 of θ is zero, the estimates of θ are asymptotically distributed as a 50–50 mixture between a point mass at zero and a normal random variable on the positive axis. When θ0 > 0, the asymptotic distribution is normal. However, for small samples, simulations suggest that the estimates of θ are approximately distributed as an x–(100 − x)% mixture, 0 ≤ x ≤ 50, between a point mass at zero and a normal random variable on the positive axis even for θ0 > 0. In light of this, p-values and confidence intervals need to be adjusted accordingly. We indicate an approximate method for carrying out the adjustment.
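In its usual large-sample form, the adjustment mentioned in the last sentence replaces the χ²₁ reference distribution for the likelihood ratio statistic by a mixture of a point mass at zero and χ²₁; a minimal sketch, with the mixture weight left adjustable (the article suggests a larger weight on zero in small samples). This is a generic boundary correction, not the authors' exact procedure.

```python
from scipy import stats

def boundary_pvalue(lrt_stat, w_zero=0.5):
    """p-value for H0: theta = 0 when theta lies on the boundary: the LRT is
    asymptotically  w_zero * (point mass at 0) + (1 - w_zero) * chi2_1,
    giving p = (1 - w_zero) * P(chi2_1 > LRT).  w_zero = 0.5 is the standard
    large-sample case; a larger weight mimics the small-sample behaviour."""
    return (1.0 - w_zero) * stats.chi2.sf(lrt_stat, df=1)

print(boundary_pvalue(3.2))                # standard 50-50 mixture
print(boundary_pvalue(3.2, w_zero=0.7))    # hypothetical small-sample weight
```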

18.
ABSTRACT

In this article, we consider a two-phase tandem queueing model with a second optional service and random feedback. The first phase of service is essential for all customers; after the completion of the first phase, a customer receives the second phase of service with probability α, is fed back to the tail of the first queue with probability β if the service is not successful, and leaves the system with probability 1 − α − β. Our main purpose is to estimate the parameters of the model, the traffic intensity, and the mean system size in the steady state via maximum likelihood and Bayesian methods. Furthermore, we find asymptotic confidence intervals for the mean system size. Finally, through a simulation study, we compute the confidence levels and mean lengths of the asymptotic confidence intervals for the mean system size at a nominal level of 0.95.
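As a simplified illustration of estimating a traffic intensity from exponential data, the sketch below treats a single M/M/1 node rather than the article's two-phase tandem model with feedback; the delta-method interval uses the fact that the log of an exponential-rate MLE has asymptotic variance 1/n. The data and function names are hypothetical.

```python
import numpy as np
from scipy import stats

def rho_ci(interarrivals, services, level=0.95):
    """MLE and delta-method CI for the traffic intensity rho = lambda / mu of a
    single M/M/1 node.  For exponential data the MLE of a rate has asymptotic
    variance rate^2 / n, so Var(log rho_hat) is roughly 1/n_a + 1/n_s."""
    a, s = np.asarray(interarrivals), np.asarray(services)
    lam_hat, mu_hat = 1.0 / a.mean(), 1.0 / s.mean()
    rho_hat = lam_hat / mu_hat
    z = stats.norm.ppf(0.5 + level / 2)
    half = z * np.sqrt(1.0 / a.size + 1.0 / s.size)
    return rho_hat, np.exp(np.log(rho_hat) - half), np.exp(np.log(rho_hat) + half)

rng = np.random.default_rng(7)
inter = rng.exponential(scale=1.0 / 0.6, size=300)   # arrival rate 0.6
serv = rng.exponential(scale=1.0 / 1.0, size=300)    # service rate 1.0
print(rho_ci(inter, serv))
```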

19.
ABSTRACT

ARMA–GARCH models are widely used to model the conditional mean and conditional variance dynamics of returns on risky assets. Empirical results suggest heavy-tailed innovations with a positive extreme value index for these models. Hence, one may use extreme value theory to estimate extreme quantiles of the residuals. Using weak convergence of the weighted sequential tail empirical process of the residuals, we derive the limiting distribution of extreme conditional Value-at-Risk (CVaR) and conditional expected shortfall (CES) estimates for a wide range of extreme value index estimators. To construct confidence intervals, we propose to use self-normalization. This leads to improved coverage relative to the normal approximation, while delivering slightly wider confidence intervals. A data-driven choice of the number of upper order statistics used in the estimation is suggested and shown to work well in simulations. An application to stock index returns documents the improvement in CVaR and CES forecasts.
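A minimal sketch of two EVT ingredients referenced above, the Hill estimator of the extreme value index and the Weissman extrapolated quantile, applied to hypothetical i.i.d. heavy-tailed residuals; the ARMA-GARCH filtering, the data-driven choice of k, and the self-normalized intervals of the paper are not reproduced.

```python
import numpy as np

def hill_gamma(losses, k):
    """Hill estimator of the extreme value index from the k largest losses."""
    x = np.sort(losses)            # ascending
    tail = x[-k:]                  # k largest order statistics
    return np.mean(np.log(tail) - np.log(x[-k - 1]))

def weissman_quantile(losses, k, p):
    """Weissman extrapolation of the (1 - p) quantile beyond the sample:
    q_hat = X_(n-k) * (k / (n * p))^gamma_hat."""
    x = np.sort(losses)
    n = x.size
    gamma = hill_gamma(losses, k)
    return x[n - k - 1] * (k / (n * p)) ** gamma, gamma

# Hypothetical heavy-tailed standardized residuals (e.g. after ARMA-GARCH filtering).
rng = np.random.default_rng(8)
resid = rng.standard_t(df=4, size=2000)
losses = -resid                            # left tail of returns viewed as losses
q_extreme, gamma_hat = weissman_quantile(losses, k=150, p=0.001)
print(f"gamma_hat = {gamma_hat:.3f}, extrapolated loss quantile at p = 0.001: {q_extreme:.3f}")
```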

20.
Abstract

Experiments in various countries with “last week” and “last month” reference periods for reporting of households’ food consumption have generally found that “week”-based estimates are higher. In India the National Sample Survey (NSS) has consistently found that “week”-based estimates are higher than month-based estimates for a majority of food item groups. But why are week-based estimates higher than month-based estimates? It has long been believed that the reason must be recall lapse, inherent in a long reporting period such as a month. But is household consumption of a habitually consumed item “recalled” in the same way as that of an item of infrequent consumption? And why doesn’t memory lapse cause over-reporting (over-assessment) as often as under-reporting? In this paper, we provide an alternative hypothesis, involving a “quantity floor effect” in reporting behavior, under which “week” may cause over-reporting for many items. We design a test to detect the effect postulated by this hypothesis and carry it out on NSS 68th round HCES data. The test results strongly suggest that our hypothesis provides a better explanation of the difference between week-based and month-based estimates than the recall lapse theory.

