Similar Documents
20 similar documents found.
1.
When two-component parallel systems are tested, the data consist of Type-II censored data X_(i), i = 1, …, r, from one component, and their concomitants Y_[i], randomly censored at X_(r), the stopping time of the experiment. Marshall & Olkin's (1967) bivariate exponential distribution is used to illustrate statistical inference procedures developed for this data type. Although this data type is practically motivated, the likelihood is complicated and maximum likelihood estimation is difficult, especially when the parameter space is a non-open set. An iterative algorithm is proposed for finding maximum likelihood estimates. This article derives several properties of the maximum likelihood estimator (MLE), including existence, uniqueness, strong consistency and asymptotic distribution. It also develops an alternative estimation method with closed-form expressions based on marginal distributions, and derives its asymptotic properties. Compared with the MLE in terms of variance in both finite and large samples, the alternative estimator performs very well, especially when the correlation between X and Y is small.
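For reference, the Marshall & Olkin (1967) bivariate exponential used here has joint survival function

\[ \bar F(x, y) = P(X > x,\, Y > y) = \exp\{-\lambda_1 x - \lambda_2 y - \lambda_{12}\max(x, y)\}, \qquad x, y \ge 0, \]

where \lambda_{12} governs the dependence between X and Y (\lambda_{12} = 0 gives independence), which connects to the closing remark about small correlation between X and Y.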

2.
In his discussion of Cox’s (1972) paper on proportional hazards regression, Breslow (1972) provided the maximum likelihood estimator for the cumulative baseline hazard function. This estimator is commonly used in practice. The estimator has also been highly valuable in the further development of Cox regression and semiparametric inference with censored data. The present paper describes the Breslow estimator and its tremendous impact on the theory and practice of survival analysis.
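The estimator in question takes the familiar form

\[ \hat\Lambda_0(t) = \sum_{i:\, t_i \le t} \frac{d_i}{\sum_{j \in R(t_i)} \exp(\hat\beta^\top x_j)}, \]

where the t_i are the distinct event times, d_i is the number of events at t_i, R(t_i) is the risk set just before t_i, and \hat\beta is the maximum partial likelihood estimate from the Cox model.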

3.
The currently existing estimation methods and goodness-of-fit tests for the Cox model mainly deal with right censored data, and they do not extend directly to other, more complicated types of censored data, such as doubly censored data, interval censored data, partly interval-censored data, bivariate right censored data, etc. In this article, we apply the empirical likelihood approach to the Cox model with complete data, derive the semiparametric maximum likelihood estimators (SPMLE) for the Cox regression parameter and the baseline distribution function, and establish the asymptotic consistency of the SPMLE. Via the functional plug-in method, these results are extended in a unified approach to doubly censored data, partly interval-censored data, and bivariate data under univariate or bivariate right censoring. For these types of censored data, the estimation procedures developed here naturally lead to Kolmogorov-Smirnov goodness-of-fit tests for the Cox model. Some simulation results are presented.
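Throughout, the Cox model specifies the hazard given a covariate vector z as

\[ \lambda(t \mid z) = \lambda_0(t)\exp(\beta^\top z), \quad\text{equivalently}\quad 1 - F(t \mid z) = \left[1 - F_0(t)\right]^{\exp(\beta^\top z)}, \]

so the SPMLE targets the pair (\beta, F_0) jointly, with F_0 the baseline distribution function.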

4.
In statistical analysis, particularly in econometrics, it is common to consider regression models in which the dependent variable is censored (limited). In particular, a censoring scheme to the left of zero is considered here. In this article, an extension of the classical normal censored model is developed by assuming independent disturbances with an identical Student-t distribution. In the context of maximum likelihood estimation, an expression for the expected information matrix is provided, and an efficient EM-type algorithm for the estimation of the model parameters is developed. The results and methods are applied to a real data set to determine which types of variables affect the income of housewives. A brief review of the normal censored regression model, or Tobit model, is also presented.
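As a sketch of the likelihood structure (up to the article's exact parametrization), with left censoring at zero and i.i.d. Student-t errors with \nu degrees of freedom, observation i contributes

\[ L_i(\beta, \sigma, \nu) = \begin{cases} F_\nu\!\left(-x_i^\top\beta/\sigma\right), & y_i = 0 \ (\text{censored}),\\ \sigma^{-1} f_\nu\!\left((y_i - x_i^\top\beta)/\sigma\right), & y_i > 0, \end{cases} \]

where f_\nu and F_\nu denote the Student-t density and distribution function; EM-type algorithms for such models typically exploit the normal scale-mixture representation of the t distribution.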

5.
Parametric models for interval censored data can now be fitted easily, with minimal programming, in certain standard statistical software packages. Regression equations can be introduced, both for the location and for the dispersion parameters. Finite mixture models can also be fitted, with a point mass on right (or left) censored observations, to allow for individuals who cannot have the event (or already have it). This mixing probability can also be allowed to follow a regression equation. Here, models based on nine different distributions are compared for three examples of heavily censored data as well as a set of simulated data. We find that, for parametric models, interval censoring can often be ignored and the density at interval centres used instead in the likelihood function, although the approximation is not always reliable. In the context of heavily interval censored data, the conclusions from parametric models are remarkably robust to changing distributional assumptions and generally more informative than those from the corresponding non-parametric models.
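The approximation in question replaces the exact interval-censored likelihood contribution by the density at the interval centre: for a response known only to lie in (L_i, R_i],

\[ F(R_i; \theta) - F(L_i; \theta) \;\approx\; f\!\left(\tfrac{L_i + R_i}{2}; \theta\right)(R_i - L_i), \]

and since the widths R_i - L_i do not involve \theta, they drop out of the maximization; the approximation can fail when the density curves sharply across wide intervals, which is presumably why it is flagged as not always reliable.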

6.
Hybrid censoring is a mixture of the Type-I and Type-II censoring schemes. This article presents statistical inference for the Weibull parameters when the data are hybrid censored. The maximum likelihood estimators (MLEs) and the approximate maximum likelihood estimators are developed for estimating the unknown parameters. Asymptotic distributions of the MLEs are used to construct approximate confidence intervals. Bayes estimates and the corresponding highest posterior density credible intervals of the unknown parameters are obtained under suitable priors and using the Gibbs sampling procedure. A method of obtaining the optimum censoring scheme based on the maximum information measure is also developed. Monte Carlo simulations are performed to compare the performances of the different methods, and one data set is analyzed for illustrative purposes.
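Concretely, under the classical (Type-I) hybrid scheme with n units on test, a prefixed failure count r and a time bound T, the experiment terminates at

\[ T^{*} = \min\{X_{(r)},\, T\}, \]

so the data are Type-II censored if the r-th failure occurs before T, and Type-I censored otherwise.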

7.
Nonlinear mixed-effects (NLME) models are flexible enough to handle repeated-measures data from various disciplines. In this article, we propose both maximum-likelihood and restricted maximum-likelihood estimation of NLME models using the first-order conditional expansion (FOCE) and the expectation-maximization (EM) algorithm. The FOCE-EM algorithm implemented in the ForStat procedure SNLME is compared with the Lindstrom and Bates (LB) algorithm implemented in both the SAS macro NLINMIX and the S-Plus/R function nlme in terms of computational efficiency and statistical properties. Two real-world data sets, an orange tree data set and a Chinese fir (Cunninghamia lanceolata) data set, and a simulated data set were used for evaluation. FOCE-EM converged for all mixed models derived from the base model in the two real-world cases, while LB did not, especially for models in which random effects are simultaneously considered in several parameters to account for between-subject variation. However, both algorithms produced identical parameter estimates and fit statistics for the converged models. We therefore recommend using FOCE-EM in NLME models, particularly when convergence is a concern in model selection.

8.
Various exact tests for statistical inference are available as powerful and accurate decision rules, provided that the corresponding critical values are tabulated or evaluated via Monte Carlo methods. This article introduces a novel hybrid method for computing p-values of exact tests by combining Monte Carlo simulations and statistical tables generated a priori. To use the data from Monte Carlo generations and tabulated critical values jointly, we employ kernel density estimation within Bayesian-type procedures. The p-values are linked to the posterior means of quantiles. In this framework, we present relevant information from the Monte Carlo experiments via likelihood-type functions, whereas tabulated critical values are used to reflect prior distributions. The local maximum likelihood technique is employed to compute functional forms of prior distributions from statistical tables. Empirical likelihood functions are proposed to replace parametric likelihood functions within the structure of the posterior mean calculations, providing a Bayesian-type procedure with a distribution-free set of assumptions. We derive the asymptotic properties of the proposed nonparametric posterior means of quantiles process. Using the theoretical propositions, we calculate the minimum number of Monte Carlo resamples needed for a desired level of accuracy on the basis of distances between actual data characteristics (e.g. sample sizes) and the characteristics of the data used to present the corresponding critical values in a table. The proposed approach makes practical applications of exact tests simple and rapid. Implementations of the proposed technique are easily carried out via the recently developed STATA and R statistical packages.
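As a minimal sketch of the Monte Carlo ingredient only (not the article's full Bayesian combination with tabulated critical values), a p-value can be estimated from null resamples and smoothed with a kernel density estimate; simulate_null_stat below is a hypothetical user-supplied generator of the test statistic under the null:

import numpy as np
from scipy.stats import gaussian_kde

def mc_pvalue(observed_stat, simulate_null_stat, B=10000, rng=None):
    # draw B realizations of the test statistic under the null hypothesis
    rng = rng or np.random.default_rng()
    sims = np.array([simulate_null_stat(rng) for _ in range(B)])
    # raw Monte Carlo p-value with the usual add-one correction
    p_raw = (1 + np.sum(sims >= observed_stat)) / (B + 1)
    # kernel-smoothed upper-tail probability; KDE is one ingredient the
    # article combines with tabulated critical values
    p_smooth = gaussian_kde(sims).integrate_box_1d(observed_stat, np.inf)
    return p_raw, p_smooth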

9.
Two multiple comparisons procedures for determining which of K arbitrarily censored populations differ from each other are proposed. The procedures are based on multiple comparisons using the generalized Wilcoxon and log-rank statistics. The procedures incorporate a pairwise ranking scheme, rather than the joint ranking scheme proposed by Breslow (1970) and Crowley and Thomas (1975). A conservative testing method suggested by an inequality due to Šidák (1967) is given; a numerical example is presented.
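The Šidák inequality yields the conservative rule: with m = K(K - 1)/2 pairwise comparisons, carrying out each test at level

\[ \alpha' = 1 - (1 - \alpha)^{1/m} \]

controls the familywise error rate at \alpha for independent statistics, and remains conservative for the dependence structures covered by Šidák's (1967) inequality.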

10.
Often in practice one is interested in situations where lifetime data are censored. Censoring is a common phenomenon frequently encountered when analyzing lifetime data, owing to time constraints. In this paper, the flexible Weibull distribution proposed in Bebbington et al. [A flexible Weibull extension, Reliab. Eng. Syst. Safety 92 (2007), pp. 719-726] is studied using maximum likelihood techniques based on three different algorithms: Newton-Raphson, Levenberg-Marquardt and trust-region reflective. The proposed parameter estimation method is introduced and shown to work from both theoretical and practical points of view. On the one hand, we apply the maximum likelihood estimation method to complete simulated and real data. On the other hand, we study for the first time the model using simulated and real data for Type-I censored samples. The estimation results are validated by a statistical test.
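As a minimal sketch (assuming the Bebbington et al. form F(x) = 1 - exp(-exp(a x - b/x)) for x > 0 with a, b > 0, and a single generic optimizer in place of the three algorithms compared in the paper), the Type-I censored log-likelihood can be maximized as follows:

import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, x, delta):
    # x: failure/censoring times; delta: 1 = observed failure, 0 = Type-I censored
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    g = a * x - b / x                    # log cumulative hazard: H(x) = exp(g)
    log_h = np.log(a + b / x**2) + g     # log hazard: h(x) = (a + b/x^2) exp(g)
    # sum of delta*log f(x) + (1 - delta)*log S(x) simplifies to delta*log h - H
    return -np.sum(delta * log_h - np.exp(g))

# hypothetical usage, with Nelder-Mead standing in for the three algorithms:
# fit = minimize(neg_log_lik, x0=[0.1, 1.0], args=(x, delta), method='Nelder-Mead')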

11.
Pharmacokinetic (PK) data often contain concentration measurements below the quantification limit (BQL). While specific values cannot be assigned to these observations, the observed BQL data are nevertheless informative and are generally known to be lower than the lower limit of quantification (LLQ). Treating BQLs as missing data violates the usual missing at random (MAR) assumption underlying standard statistical methods, and therefore leads to biased or less precise parameter estimation. By definition, these data lie within the interval [0, LLQ] and can be considered censored observations. Statistical methods that handle censored data, such as maximum likelihood and Bayesian methods, are thus useful in modelling such data sets. The main aim of this work was to investigate the impact of the amount of BQL observations on the bias and precision of parameter estimates in population PK models (non-linear mixed effects models in general) under the maximum likelihood method as implemented in SAS and NONMEM, and under a Bayesian approach using Markov chain Monte Carlo (MCMC) as applied in WinBUGS. A second aim was to compare these different methods in dealing with BQL or censored data in a practical situation. The evaluation was illustrated by simulation based on a simple PK model: a number of data sets were simulated from a one-compartment, first-order elimination PK model, and several quantification limits were applied to each simulated data set to generate data sets with given amounts of BQL data. The average percentage of BQL ranged from 25% to 75%. The influence on the bias and precision of all population PK model parameters, such as clearance and volume of distribution, under each estimation approach was explored and compared.
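The censored-likelihood treatment of BQL records reduces to the following per-observation contribution (a sketch of the principle, not of the exact SAS/NONMEM/WinBUGS parametrizations): a quantified concentration y_i contributes the density f(y_i \mid \theta), while a BQL record contributes the probability mass below the limit,

\[ P(Y_i \le \mathrm{LLQ} \mid \theta) = F(\mathrm{LLQ} \mid \theta), \]

rather than being discarded or imputed, which is what avoids the bias induced by treating BQL values as missing.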

12.
Four widely used statistical program packages—BMDP, SPSS, DATATEXT, and OSIRIS—were compared for computational accuracy on sample means, standard deviations, and correlations. Only one, BMDP, was not seriously inaccurate in calculations on a data set of three observations. Further, SPSS computed inaccurate statistics in a discriminant analysis on a real data set of 848 observations. It is recommended that the desk calculator algorithm, found in most of these programs, not be used in packages that may run on short-word-length machines.
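A brief illustration of the failure mode (an illustrative sketch: float32 stands in for a short-word-length machine, and the three-observation data set below is hypothetical, not the one used in the study):

import numpy as np

def var_textbook(x):
    # "desk calculator" formula: (sum of squares - n * mean^2) / (n - 1)
    n = len(x)
    return (np.sum(np.square(x)) - n * np.mean(x) ** 2) / (n - 1)

def var_welford(x):
    # numerically stable one-pass recurrence (Welford)
    mean, m2 = 0.0, 0.0
    for k, xi in enumerate(x, 1):
        d = xi - mean
        mean += d / k
        m2 += d * (xi - mean)
    return m2 / (len(x) - 1)

x = np.array([10000.0, 10001.0, 10002.0], dtype=np.float32)  # true variance: 1.0
# var_textbook(x) loses essentially all accuracy in float32 (catastrophic
# cancellation in the subtraction), while var_welford(x) recovers 1.0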

13.
As in many studies, the data collected are limited, and an exact value is recorded only if it falls within an interval range; hence the responses can be left, interval or right censored. Linear (and nonlinear) regression models are routinely used to analyze these types of data and are based on normality assumptions for the error terms. However, such analyses may not provide robust inference when the normality assumptions are questionable. In this article, we develop a Bayesian framework for censored linear regression models by replacing the Gaussian assumption for the random errors with scale mixtures of normal (SMN) distributions. The SMN is an attractive class of symmetric heavy-tailed densities that includes the normal, Student-t, Pearson type VII, slash and contaminated normal distributions as special cases. Using a Bayesian paradigm, an efficient Markov chain Monte Carlo algorithm is introduced to carry out posterior inference. A new hierarchical prior distribution is suggested for the degrees-of-freedom parameter in the Student-t distribution. The likelihood function is used not only to compute some Bayesian model selection measures but also to develop Bayesian case-deletion influence diagnostics based on the q-divergence measure. The proposed Bayesian methods are implemented in the R package BayesCR. The newly developed procedures are illustrated with applications using real and simulated data.
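The SMN class admits the standard representation

\[ f(y \mid \mu, \sigma^2) = \int_0^\infty \phi\!\left(y;\, \mu,\, \kappa(u)\,\sigma^2\right) dH(u), \]

where \phi is the normal density, H is the mixing distribution and \kappa(\cdot) is a positive weight function; for example, \kappa(u) = 1/u with U \sim \mathrm{Gamma}(\nu/2, \nu/2) recovers the Student-t, whose degrees-of-freedom parameter is the target of the hierarchical prior mentioned above.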

14.
This article considers the statistical analysis of a dependent competing risks model with incomplete data under Type-I progressive hybrid censoring, using a Marshall-Olkin bivariate Weibull distribution. Based on the expectation-maximization algorithm, maximum likelihood estimators for the unknown parameters are obtained, and the missing information principle is used to obtain the observed information matrix. As the maximum likelihood approach may fail when the available information is insufficient, a Bayesian approach incorporating auxiliary variables is developed for estimating the parameters of the model, and a Monte Carlo method is employed to construct the highest posterior density credible intervals. The proposed method is illustrated through a numerical example under different progressive censoring schemes and masking probabilities. Finally, a real data set is analyzed for illustrative purposes.

15.
The paper considers goodness-of-fit tests with right censored or doubly censored data. The Fredholm Integral Equation (FIE) method proposed by Ren (1993) is implemented in simulation studies to estimate the null distribution of the Cramér-von Mises test statistics and the asymptotic covariance function of the self-consistent estimator for the lifetime distribution with right censored or doubly censored data. We show that, for fixed alternatives, the bootstrap method does not estimate the null distribution consistently for doubly censored data. For the right censored case, a comparison between the performance of FIE and the n out of n bootstrap shows that FIE gives a better estimate of the null distribution. The application of FIE to a set of right censored Channing House data and to a set of doubly censored breast cancer data is presented.
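For concreteness, the Cramér-von Mises statistic being calibrated is of the form

\[ W_n^2 = n \int \left(\hat F_n(t) - F_0(t)\right)^2 dF_0(t), \]

with \hat F_n the self-consistent estimator under right or double censoring standing in for the usual empirical distribution function.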

16.
In this article, we consider several statistical models for censored exponential data. We prove a large deviation result for the maximum likelihood estimators (MLEs) of each model, and a single result for the posterior distributions that works well in all the cases. Finally, by comparing the large deviation rate functions for the MLEs and the posterior distributions, we show that a typical feature fails for one model; moreover, we illustrate the relation between this fact and a well-known result for curved exponential models.

17.
This article considers the problem of testing the validity of the assumption that the underlying distribution of life is Pareto. For complete and censored samples, the relationship between the Pareto and the exponential distributions can be of vital importance in testing the validity of this assumption. For grouped uncensored data, the classical Pearson χ2 test based on the multinomial model can be used. Attention is confined in this article to grouped data with withdrawals within intervals. Graphical as well as analytical procedures are presented. Maximum likelihood estimators for the parameters of the Pareto distribution based on grouped data are derived.
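The Pareto-exponential relationship invoked is the standard one: if X has survival function \bar F(x) = (\sigma/x)^{\alpha} for x \ge \sigma, then

\[ Y = \log(X/\sigma) \sim \mathrm{Exponential}(\alpha), \]

so procedures for testing exponentiality, applied to the log-transformed data, double as tests of the Pareto assumption.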

18.
In modelling repeated count outcomes, generalized linear mixed-effects models are commonly used to account for within-cluster correlations. However, inconsistent results are frequently generated by the various statistical R packages and SAS procedures, especially in the case of moderate or strong within-cluster correlation or overdispersion. We investigated the underlying numerical approaches and statistical theories on which these packages and procedures are built. We then compared the performance of these statistical packages and procedures by simulating both Poisson-distributed and overdispersed count data. The SAS NLMIXED procedure outperformed the other procedures in all settings.
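In its simplest form, the class of models compared is (a generic formulation, not any single package's parametrization)

\[ y_{ij} \mid b_i \sim \mathrm{Poisson}(\mu_{ij}), \qquad \log \mu_{ij} = x_{ij}^\top \beta + b_i, \qquad b_i \sim N(0, \sigma_b^2), \]

with b_i a cluster-level random intercept; the packages differ chiefly in how they approximate the integral over b_i (quadrature versus linearization), which is where discrepancies under strong correlation or overdispersion tend to arise.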

19.
This paper is concerned with developing procedures for constructing confidence intervals that hold approximately equal tail probabilities and coverage probabilities close to the nominal level for the scale parameter θ of the two-parameter exponential lifetime model when the data are time censored. We use a conditional approach to eliminate the nuisance parameter and develop several procedures based on the conditional likelihood. The methods are (a) a method based on the likelihood ratio, (b) a method based on the skewness-corrected score (Bartlett, Biometrika 40 (1953), 12–19), (c) a method based on an adjustment to the signed root likelihood ratio (DiCiccio, Field et al., Biometrika 77 (1990), 77–95), and (d) a method based on parameter transformation to the normal approximation. The performance of these procedures is then compared, through simulations, with the usual likelihood-based procedure. The skewness-corrected score procedure performs best in terms of holding both equal tail probabilities and nominal coverage probabilities, even for small samples.
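Method (c) builds on the signed root of the likelihood ratio: for a scalar interest parameter,

\[ r(\theta) = \operatorname{sign}(\hat\theta - \theta)\sqrt{2\left[\ell(\hat\theta) - \ell(\theta)\right]} \;\approx\; N(0, 1), \]

with \ell the conditional log-likelihood; the DiCiccio et al. (1990) adjustment corrects r so that the standard normal approximation holds to higher order, improving the tail symmetry of the resulting intervals.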

20.
This note discusses a problem that can occur when forward stepwise regression is used for variable selection and the candidate variables include a categorical variable with more than two categories. Most software packages (such as SAS, SPSSx, BMDP) include special programs for performing stepwise regression. The user of these programs has to code categorical variables with dummy variables. In this case, forward selection may wrongly indicate that a categorical variable with more than two categories is nonsignificant. This is a disadvantage of forward selection compared with the backward elimination method. A way to avoid the problem would be to test all dummy variables corresponding to the same categorical variable in a single step, rather than one dummy variable at a time, as in the analysis of covariance. This option, however, is not available in forward stepwise procedures, except for stepwise logistic regression in BMDP. A practical possibility is to repeat the forward stepwise regression, changing the reference category each time.
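The single-step alternative is the usual extra-sum-of-squares F test on the whole block of q dummy variables: with RSS_0 from the model excluding the categorical variable and RSS_1 from the model including all q dummies,

\[ F = \frac{(RSS_0 - RSS_1)/q}{RSS_1/(n - p)} \sim F_{q,\, n-p} \quad \text{under } H_0, \]

where n - p is the residual degrees of freedom of the larger model; forward selection that enters one dummy at a time can miss a categorical variable whose categories are jointly, but not individually, significant.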
