Similar Articles
20 similar articles found.
1.
Inferences concerning exponential distributions are considered from a sampling-theory viewpoint when the data are randomly right censored and the censored values are missing. Both one-sample and m-sample (m ≥ 2) problems are considered. Likelihood functions are obtained for situations in which the censoring mechanism is informative, which leads to natural and intuitively appealing estimators of the unknown proportions of censored observations. For testing hypotheses about the unknown parameters, three well-known tests are considered: the likelihood ratio test, the score test, and the Wald-type test.
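For an exponential model with right-censored observations, the maximum likelihood estimator of the rate and the likelihood ratio statistic have simple closed forms. A minimal sketch (function names are illustrative, not from the paper, and it assumes the censoring times are recorded rather than missing):

```python
import math

def exp_mle_censored(times, observed):
    """MLE of the exponential rate with right-censored data:
    lambda_hat = (number of events) / (total time at risk)."""
    d = sum(observed)      # observed[i] = 1 for an event, 0 for censoring
    total = sum(times)     # total observed time (event + censoring times)
    return d / total

def lr_statistic(times, observed, lam0):
    """Likelihood ratio statistic for H0: lambda = lam0
    (asymptotically chi-square with 1 degree of freedom)."""
    d = sum(observed)
    total = sum(times)
    lam_hat = d / total
    loglik = lambda lam: d * math.log(lam) - lam * total
    return 2.0 * (loglik(lam_hat) - loglik(lam0))
```

The statistic is zero at the MLE and grows as the hypothesized rate moves away from it, which is the basis of the likelihood ratio test mentioned in the abstract.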

2.
One important type of question in statistical inference is how to interpret data as evidence. The law of likelihood provides a satisfactory answer in interpreting data as evidence for simple hypotheses, but remains silent for composite hypotheses. This article examines how the law of likelihood can be extended to composite hypotheses within the scope of the likelihood principle. From a system of axioms, we conclude that the strength of evidence for composite hypotheses should be represented by an interval between the lower and upper profile likelihoods. This article is intended to reveal the connection between profile likelihoods and the law of likelihood under the likelihood principle rather than argue in favor of the use of profile likelihoods in addressing general questions of statistical inference. The interpretation of the result is also discussed.

3.
The Maximum Likelihood (ML) and Best Linear Unbiased (BLU) estimators of the location and scale parameters of an extreme value distribution (Lawless [1982]) are compared under conditions of small sample sizes and Type I censorship. The comparisons were made in terms of the mean square error criterion. According to this criterion, the ML estimator of σ in the case of very small sample sizes (n < 10) and heavy censorship (low censoring time) proved to be more efficient than the corresponding BLU estimator. However, the BLU estimator of σ attains parity with the corresponding ML estimator when the censoring time increases, even for sample sizes as low as 10, and attains equivalence when the sample size increases above 10, particularly when the censoring time is also increased. The situation is reversed when it comes to estimating the location parameter μ: the BLU estimator was found to be consistently more efficient than the ML estimator, despite the improved performance of the ML estimator as the sample size increases. However, computational ease and convenience favor the ML estimators.

4.
In this article, an EM algorithm approach to obtaining the maximum likelihood estimates of parameters for analyzing bivariate skew normal data with non-monotone missing values is presented. A simulation study is implemented to investigate the performance of the presented algorithm. Results of an application are also reported, where a bootstrap approach is used to find the variances of the parameter estimates.

5.
We derive an identity for nonparametric maximum likelihood estimators (NPMLE) and regularized MLEs in censored data models which expresses the standardized maximum likelihood estimator in terms of the standardized empirical process. This identity provides an effective starting point in proving both consistency and efficiency of NPMLE and regularized MLE. The identity and corresponding method for proving efficiency is illustrated for the NPMLE in the univariate right-censored data model, the regularized MLE in the current status data model and for an implicit NPMLE based on a mixture of right-censored and current status data. Furthermore, a general algorithm for estimation of the limiting variance of the NPMLE is provided. This revised version was published online in July 2006 with corrections to the Cover Date.

6.
We introduce fully non-parametric two-sample tests for the null hypothesis that the samples come from the same distribution when the values are only indirectly given via current status censoring. The tests are based on the likelihood ratio principle and allow the observation distributions to be different for the two samples, in contrast with earlier proposals for this situation. A bootstrap method is given for determining critical values and asymptotic theory is developed. A simulation study, using Weibull distributions, is presented to compare the power behaviour of the tests with that of other non-parametric tests in this situation.

7.
This paper discusses regression analysis of current status or case I interval-censored failure time data arising from the additive hazards model. In this situation, some covariates could be missing for various reasons, but there may exist some auxiliary information about the missing covariates. To address the problem, we propose an estimated partial likelihood approach for estimation of regression parameters, which makes use of the available auxiliary information. The method can be easily implemented, and the asymptotic properties of the resulting estimates are established. To assess the finite sample performance of the proposed method, an extensive simulation study is conducted and indicates that the method works well.

8.
Subgroup detection has received increasing attention recently in different fields such as clinical trials, public management and market segmentation analysis. In these fields, people often face time-to-event data, which are commonly subject to right censoring. This paper proposes a semiparametric Logistic-Cox mixture model for subgroup analysis when the outcome of interest is an event time subject to right censoring. The proposed method mainly consists of a likelihood ratio-based testing procedure for the existence of subgroups. The expectation-maximization iteration is applied to improve the testing power, and a model-based bootstrap approach is developed to implement the testing procedure. When subgroups exist, the proposed model can also be used to estimate the subgroup effect and construct predictive scores for subgroup membership. The large sample properties of the proposed method are studied, and its finite sample performance is assessed by simulation studies. A real data example is also provided for illustration.

9.
Recently, a least absolute deviations (LAD) estimator for median regression models with doubly censored data was proposed and the asymptotic normality of the estimator was established. However, inference on the regression parameter vectors based on this asymptotic normality is impractical, because the asymptotic covariance matrices involve conditional densities of the error terms and are therefore difficult to estimate reliably. In this article, three methods, based on the bootstrap, random weighting, and empirical likelihood, respectively, none of which requires density estimation, are proposed for making inference in doubly censored median regression models. Simulations are also done to assess the performance of the proposed methods.

10.
Doubly censored failure time data occur in many areas including demographical studies, epidemiology studies, medical studies and tumorigenicity experiments, and correspondingly some inference procedures have been developed in the literature (Biometrika, 91, 2004, 277; Comput. Statist. Data Anal., 57, 2013, 41; J. Comput. Graph. Statist., 13, 2004, 123). In this paper, we discuss regression analysis of such data under a class of flexible semiparametric transformation models, which includes some commonly used models for doubly censored data as special cases. For inference, the non-parametric maximum likelihood estimation will be developed and in particular, we will present a novel expectation-maximization algorithm with the use of subject-specific independent Poisson variables. In addition, the asymptotic properties of the proposed estimators are established and an extensive simulation study suggests that the proposed methodology works well for practical situations. The method is applied to an AIDS study.

11.
In many industrial and biological experiments, the recorded data consist of the number of observations falling in an interval. In this paper, we develop two test statistics to test whether the grouped observations come from an exponential distribution. Following the procedure of Damianou and Kemp (1990, New goodness-of-fit statistics for discrete and continuous data, American Journal of Mathematical and Management Sciences 10:275-307), Kolmogorov-Smirnov-type statistics are developed with the maximum likelihood estimator of the scale parameter substituted for the true unknown scale. The asymptotic theory for both statistics is studied and power studies are carried out via simulations.
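A Kolmogorov-Smirnov-type statistic with the estimated scale plugged into the exponential CDF can be sketched as follows. This is an ungrouped, simplified illustration, not the paper's grouped-data construction; the function name is an assumption:

```python
import math

def ks_exponential(data):
    """KS-type statistic for exponentiality with the MLE of the scale
    (the sample mean) substituted for the unknown scale parameter."""
    n = len(data)
    theta = sum(data) / n                 # MLE of the scale parameter
    xs = sorted(data)
    d = 0.0
    for i, x in enumerate(xs):
        f = 1.0 - math.exp(-x / theta)    # fitted exponential CDF
        # supremum distance between fitted CDF and empirical CDF
        d = max(d, f - i / n, (i + 1) / n - f)
    return d
```

Because the scale is estimated from the same data, the statistic's null distribution differs from the classical Kolmogorov-Smirnov tables, which is why the abstract's asymptotic theory (or simulation) is needed for critical values.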

12.
The currently existing estimation methods and goodness-of-fit tests for the Cox model mainly deal with right-censored data and do not extend directly to other, more complicated types of censored data, such as doubly censored data, interval-censored data, partly interval-censored data, and bivariate right-censored data. In this article, we apply the empirical likelihood approach to the Cox model with complete samples, derive the semiparametric maximum likelihood estimators (SPMLE) for the Cox regression parameter and the baseline distribution function, and establish the asymptotic consistency of the SPMLE. Via the functional plug-in method, these results are extended in a unified approach to doubly censored data, partly interval-censored data, and bivariate data under univariate or bivariate right censoring. For these types of censored data, the estimation procedures developed here naturally lead to Kolmogorov-Smirnov goodness-of-fit tests for the Cox model. Some simulation results are presented.

13.
14.
For left-truncated and right-censored data, the technique proposed by Brookmeyer and Crowley (1982) is extended to construct a pointwise confidence interval for the median residual lifetime. This procedure is computationally simpler than the score-type confidence interval of Jeong et al. (2008) and the empirical likelihood ratio confidence interval of Zhou and Jeong (2011). Further, transformations of the estimator are applied to improve the approximation to the asymptotic distribution for small sample sizes. A simulation study is conducted to investigate the accuracy of these confidence intervals, and their implementation is illustrated on two real datasets.
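Estimators of median (residual) lifetime in this line of work are read off the product-limit survival curve. As a simplified illustration, a plain Kaplan-Meier median for right-censored data (without the left-truncation adjustment the paper handles) can be sketched as:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve for right-censored data.
    events[i] = 1 for an observed event, 0 for censoring.
    Returns a list of (time, S(time)) at each distinct event time."""
    data = sorted(zip(times, events))
    n = len(data)
    s, curve, at_risk, i = 1.0, [], n, 0
    while i < n:
        t = data[i][0]
        d = c = 0                         # events / censorings at time t
        while i < n and data[i][0] == t:
            d += data[i][1]
            c += 1 - data[i][1]
            i += 1
        if d > 0:
            s *= 1.0 - d / at_risk        # product-limit update
            curve.append((t, s))
        at_risk -= d + c
    return curve

def km_median(times, events):
    """Smallest event time t with estimated S(t) <= 0.5, or None."""
    for t, s in kaplan_meier(times, events):
        if s <= 0.5:
            return t
    return None
```

The confidence intervals in the abstract are then built around this kind of median estimate, with transformations improving the small-sample normal approximation.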

15.
We consider m×m covariance matrices, Σ1 and Σ2, which satisfy Σ2 - Σ1 = Δ, where Δ has a specified rank. Maximum likelihood estimators of Σ1 and Σ2 are obtained when sample covariance matrices having Wishart distributions are available and rank(Δ) is known. The likelihood ratio statistic for a test about the value of rank(Δ) is also given and some properties of its null distribution are obtained. The methods developed in this paper are illustrated through an example.

16.
Manufacturers often apply process capability indices in quality control. This study constructs statistical methods for assessing the lifetime performance index of Gompertz products under progressively Type II right-censored samples. The maximum likelihood estimator of the index is derived via a data transformation and then used to develop a hypothesis testing procedure and a confidence interval for assessing product performance. We also give an example and some Monte Carlo simulations to assess the behavior of the testing procedure and confidence interval. The results show that the proposed method can effectively evaluate whether the lifetime of products meets the requirement.

17.
In this paper, we propose an estimation method for incomplete sample data. We decompose the likelihood according to the missing-data patterns and combine the estimators from each likelihood, weighted by the Fisher information ratio. This approach provides a simple way of estimating parameters, especially for non-monotone missing data. Numerical examples are presented to illustrate the method.

18.
In this paper, we consider the problem of estimating the location and scale parameters of an extreme value distribution based on multiply Type-II censored samples. We first describe the best linear unbiased estimators and the maximum likelihood estimators of these parameters. Observing that the best linear unbiased estimators require tables of coefficients and that the maximum likelihood estimators do not exist in explicit algebraic form and must be found numerically, we develop approximate maximum likelihood estimators by appropriately approximating the likelihood equations. In addition to being simple explicit estimators, these turn out to be nearly as efficient as the best linear unbiased estimators and the maximum likelihood estimators. Next, we derive the asymptotic variances and covariance of these estimators in terms of the first two single moments and the product moments of order statistics from the standard extreme value distribution. Finally, we present an example to illustrate all the methods of estimation discussed in this paper.
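For a complete (uncensored) sample from the maximum extreme value, i.e. Gumbel, distribution, the likelihood equations reduce to a one-dimensional fixed-point problem for the scale, which shows why the censored case in the abstract requires numerical or approximate methods. A minimal complete-sample sketch (not the paper's multiply Type-II censored setting):

```python
import math

def gumbel_mle(xs, tol=1e-10, max_iter=500):
    """MLE of (mu, sigma) for the Gumbel distribution
    F(x) = exp(-exp(-(x - mu)/sigma)), complete samples only.
    Iterates the profile likelihood equation
        sigma = mean(x) - sum(x*w) / sum(w),  w = exp(-x/sigma)."""
    n = len(xs)
    mean = sum(xs) / n
    sd = (sum((x - mean) ** 2 for x in xs) / n) ** 0.5
    sigma = sd * math.sqrt(6.0) / math.pi   # moment estimator as start
    for _ in range(max_iter):
        w = [math.exp(-x / sigma) for x in xs]
        new_sigma = mean - sum(x * wi for x, wi in zip(xs, w)) / sum(w)
        if abs(new_sigma - sigma) < tol:
            sigma = new_sigma
            break
        sigma = new_sigma
    mu = -sigma * math.log(sum(math.exp(-x / sigma) for x in xs) / n)
    return mu, sigma
```

Under multiple Type-II censoring the weights change and the fixed point is no longer this simple, which motivates the approximate maximum likelihood estimators described in the abstract.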

19.
The problem of testing equality of two multivariate normal covariance matrices is considered. Assuming that the incomplete data have a monotone pattern, a quantity similar to the likelihood ratio test statistic is proposed. A satisfactory approximation to the distribution of this quantity is derived, and hypothesis testing based on the approximate distribution is outlined. Monte Carlo studies indicate that the test is very satisfactory even for moderately small samples. The proposed methods are illustrated using an example.

20.
Closed-form confidence intervals on linear combinations of variance components were developed generically for balanced data and studied mainly for one-way and two-way random effects analysis of variance models. The Satterthwaite approach is easily generalized to unbalanced data and can be modified to increase its coverage probability. These approaches are applied to measures of assay precision, in combination with (restricted) maximum likelihood and Henderson III Type 1 and Type 3 estimation. Simulations of interlaboratory studies with unbalanced data and small sample sizes show no clear superiority of any combination of estimation method and Satterthwaite approach on three measures of assay precision. However, the modified Satterthwaite approach with Henderson III Type 3 estimation is often preferred over the other combinations.
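The classical Satterthwaite approach matches the first two moments of a linear combination of independent mean squares to a scaled chi-square; the effective degrees of freedom then drive the confidence interval. A minimal sketch of that formula (the function name is illustrative; the paper's modified version differs):

```python
def satterthwaite_df(coefs, mean_squares, dfs):
    """Satterthwaite effective degrees of freedom for the linear
    combination sum_i c_i * MS_i of independent mean squares, where
    MS_i is distributed as (theta_i / df_i) * chi-square(df_i):

        df_eff = (sum c_i MS_i)^2 / sum (c_i MS_i)^2 / df_i
    """
    num = sum(c * ms for c, ms in zip(coefs, mean_squares)) ** 2
    den = sum((c * ms) ** 2 / df
              for c, ms, df in zip(coefs, mean_squares, dfs))
    return num / den
```

A single mean square recovers its own degrees of freedom, and combining two equal mean squares with equal degrees of freedom doubles them, which matches the usual sanity checks on the approximation.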
