Similar articles (20 results)
1.
The study of differences among groups is an interesting statistical topic in many applied fields. It is very common in this context to have data that are subject to mechanisms of loss of information, such as censoring and truncation. In the setting of a two-sample problem with data subject to left truncation and right censoring, we develop an empirical likelihood method to do inference for the relative distribution. We obtain a nonparametric generalization of Wilks' theorem and construct nonparametric pointwise confidence intervals for the relative distribution. Finally, we analyse the coverage probability and length of these confidence intervals through a simulation study and illustrate their use with a real data set on gastric cancer. The Canadian Journal of Statistics 38: 453–473; 2010 © 2010 Statistical Society of Canada

2.
3.
Modelling of the relationship between concentration (PK) and response (PD) plays an important role in drug development. The modelling becomes complicated when the drug concentration and response measurements are not taken simultaneously and/or hysteresis occurs between the response and the concentration. A model-based approach fits a joint pharmacokinetic (PK) and concentration–response (PK/PD) model, including an effect compartment if necessary, to concentration and response data. However, this approach relies on the PK data being well described by a common PK model. We propose an algorithm for a semi-parametric approach to fitting nonlinear mixed PK/PD models including an effect compartment using linear interpolation and extrapolation for concentration data. This approach is independent of the PK model, and the algorithm can easily be implemented using SAS PROC NLMIXED. Practical issues in programming and computing are also discussed. The properties of this approach are examined using simulations. This approach is used to analyse data from a study of the PK/PD relationship between insulin and glucose levels. Copyright © 2005 John Wiley & Sons, Ltd.
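As an illustration of the interpolation step described above, the following minimal Python sketch predicts concentrations at the response (PD) sampling times from sparse PK samples. The times, concentrations and the flat-extrapolation rule are hypothetical assumptions for illustration; the full NLMIXED PK/PD model is not reproduced here.

# Minimal sketch of the interpolation step: predict concentration at the
# response (PD) sampling times from sparse PK samples, assuming simple
# linear interpolation with flat extrapolation outside the sampled range.
import numpy as np

# Hypothetical data for one subject: PK samples and PD sampling times (hours)
pk_times = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
pk_conc  = np.array([12.0, 18.5, 15.2, 9.8, 4.1])    # e.g. insulin concentration
pd_times = np.array([0.75, 1.5, 3.0, 6.0, 10.0])     # glucose measurement times

# np.interp performs linear interpolation and holds the end values constant
# outside [pk_times[0], pk_times[-1]]; a linear-extrapolation rule could be
# substituted if that matches the modelling assumptions.
conc_at_pd_times = np.interp(pd_times, pk_times, pk_conc)
print(conc_at_pd_times)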

4.
In this paper, we present a statistical inference procedure for the step-stress accelerated life testing (SSALT) model with Weibull failure time distribution and interval censoring via the formulation of a generalized linear model (GLM). The likelihood function of an interval-censored SSALT is in general too complicated to yield analytical results. However, by transforming the failure times to an exponential distribution and using a binomial random variable for the failure counts occurring in inspection intervals, a GLM formulation with a complementary log-log link function can be constructed. The estimates of the regression coefficients for the Weibull scale parameter are obtained through the iterative weighted least squares (IWLS) method, and the shape parameter is updated by direct maximum likelihood (ML) estimation. The confidence intervals for these parameters are estimated through bootstrapping. The application of the proposed GLM approach is demonstrated with an industrial example.
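A minimal sketch of the IWLS step for a binomial GLM with a complementary log-log link is given below, fitted to simulated interval-type failure counts. The design matrix, true coefficients and data are hypothetical; the paper's exact SSALT formulation (including the shape-parameter update) is not reproduced.

# Minimal IWLS sketch for a binomial GLM with a complementary log-log link,
# as used to model interval-censored failure counts; simulated data, not the
# paper's SSALT design.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical design: intercept, log(stress), log(interval end time)
X = np.column_stack([np.ones(40), np.log(rng.uniform(1, 3, 40)),
                     np.log(rng.uniform(10, 100, 40))])
beta_true = np.array([-6.0, 1.5, 1.2])
p_true = 1.0 - np.exp(-np.exp(X @ beta_true))        # inverse cloglog link
n = rng.integers(20, 50, size=40)                    # items at risk per interval
y = rng.binomial(n, p_true)                          # observed failure counts

beta = np.zeros(X.shape[1])
for _ in range(50):                                  # IWLS iterations
    eta = X @ beta
    mu = np.clip(1.0 - np.exp(-np.exp(eta)), 1e-10, 1 - 1e-10)
    dmu = np.exp(eta - np.exp(eta))                  # d mu / d eta
    z = eta + (y / n - mu) / dmu                     # working response
    w = n * dmu**2 / (mu * (1.0 - mu))               # working weights
    beta_new = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))
    if np.max(np.abs(beta_new - beta)) < 1e-8:
        beta = beta_new
        break
    beta = beta_new
print(beta)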

5.
In this paper we develop a regression model for survival data in the presence of long-term survivors, based on a defective version of the generalized Gompertz distribution introduced by El-Gohary et al. [The generalized Gompertz distribution. Appl Math Model. 2013;37:13–24]. This model includes as a special case the Gompertz cure rate model proposed by Gieser et al. [Modelling cure rates using the Gompertz model with covariate information. Stat Med. 1998;17:831–839]. An expectation–maximization (EM) algorithm is then developed for determining the maximum likelihood estimates (MLEs) of the parameters of the model. In addition, we discuss the construction of confidence intervals for the parameters using the asymptotic distributions of the MLEs and the parametric bootstrap method, and assess their performance through a Monte Carlo simulation study. Finally, the proposed methodology is applied to a database on uterine cervical cancer.
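The defective-Gompertz idea behind cure-rate modelling can be illustrated briefly: with hazard h(t) = β exp(αt) and α < 0, the survival function is improper and its limit is the cured proportion. The sketch below assumes this standard parameterization and hypothetical parameter values; it is not the paper's EM fitting procedure.

# Sketch of the defective Gompertz idea behind cure-rate modelling: with
# hazard h(t) = beta * exp(alpha * t) and alpha < 0, the survival function
# S(t) = exp(-(beta/alpha) * (exp(alpha*t) - 1)) is improper and its limit
# gives the cured proportion.  Parameter values below are hypothetical.
import numpy as np

alpha, beta = -0.4, 0.3          # alpha < 0 makes the distribution defective

def survival(t):
    return np.exp(-(beta / alpha) * (np.exp(alpha * t) - 1.0))

cure_fraction = np.exp(beta / alpha)     # limit of S(t) as t -> infinity
print(survival(np.array([1.0, 5.0, 20.0])), cure_fraction)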

6.
In preclinical and clinical experiments, pharmacokinetic (PK) studies are designed to analyse the evolution of drug concentration in plasma over time, i.e. the PK profile. Several PK parameters are estimated to summarize the drug's complete kinetic profile: the area under the curve (AUC), the maximal concentration (C(max)), the time at which the maximal concentration occurs (t(max)) and the half-life (t(1/2)). Several methods have been proposed to estimate these PK parameters. A first method relies on interpolating between observed concentrations; the interpolation is usually linear. This method is simple and fast. Another method relies on compartmental modelling, in which nonlinear methods are used to estimate the parameters of a chosen compartmental model. This method generally provides good results. However, if the data are sparse and noisy, two difficulties can arise. The first is the choice of a suitable compartmental model, given the small number of data points available in, for instance, preclinical experiments. The second is that nonlinear methods can fail to converge. Much work has been done recently to circumvent these problems (J. Pharmacokinet. Pharmacodyn. 2007; 34:229-249, Stat. Comput., to appear, Biometrical J., to appear, ESAIM P&S 2004; 8:115-131). In this paper, we propose a Bayesian nonparametric model based on P-splines. This method provides good PK parameter estimates whatever the number of available observations and the level of noise in the data. Simulations show that the proposed method provides better PK parameter estimates than the interpolation method, both in terms of bias and precision. The Bayesian nonparametric method also provides better AUC and t(1/2) estimates than a correctly specified compartmental model, whereas the latter performs better for t(max) and C(max). We extend the basic model to a hierarchical one that treats the case of concentrations from different subjects, which allows individual PK parameter estimates to be obtained. Finally, with Bayesian methods we can easily obtain uncertainty measures in the form of credibility sets for each PK parameter.
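For reference, the interpolation-based baseline mentioned above can be sketched in a few lines of Python: AUC by the trapezoidal rule, C(max) and t(max) by direct lookup, and t(1/2) from a log-linear fit to the terminal points. The single-subject data and the "last three points" rule are hypothetical assumptions; the Bayesian P-spline model itself is not shown.

# Minimal non-compartmental sketch of the PK summaries discussed above
# (AUC, Cmax, tmax, t1/2) using linear interpolation between observed
# concentrations; hypothetical single-subject data, not the P-spline model.
import numpy as np

t = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0])   # hours
c = np.array([2.1, 3.8, 4.6, 3.9, 2.4, 1.1, 0.55])    # concentration

auc = np.sum(np.diff(t) * (c[1:] + c[:-1]) / 2.0)     # trapezoidal AUC
cmax = c.max()
tmax = t[np.argmax(c)]

# Terminal half-life from a log-linear regression on the last three points
slope, intercept = np.polyfit(t[-3:], np.log(c[-3:]), 1)
t_half = np.log(2.0) / (-slope)

print(f"AUC={auc:.2f}, Cmax={cmax:.2f}, tmax={tmax:.2f}, t1/2={t_half:.2f}")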

7.
8.
Adaptive Type-II progressive censoring schemes have been shown to be useful in striking a balance between statistical estimation efficiency and the time spent on a life-testing experiment. In this article, some general statistical properties of an adaptive Type-II progressive censoring scheme are first investigated. A bias correction procedure is proposed to reduce the bias of the maximum likelihood estimators (MLEs). We then focus on extreme value distributed lifetimes and derive the Fisher information matrix for the MLEs based on these properties. Four different approaches are proposed to construct confidence intervals for the parameters of the extreme value distribution. The performance of these methods is compared through an extensive Monte Carlo simulation.
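A generic parametric-bootstrap bias-correction sketch for extreme value (Gumbel) MLEs is shown below. Complete samples are used for simplicity, so the adaptive Type-II progressive censoring of the paper is not reproduced; the sample size, parameter values and number of bootstrap replicates are hypothetical.

# Generic parametric-bootstrap bias-correction sketch for MLEs of an extreme
# value (Gumbel) distribution.  Complete samples are used here for simplicity;
# the adaptive Type-II progressive censoring of the paper is not reproduced.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = stats.gumbel_r.rvs(loc=2.0, scale=1.5, size=30, random_state=rng)

loc_hat, scale_hat = stats.gumbel_r.fit(x)      # MLEs from the observed sample

B = 500
boot = np.empty((B, 2))
for b in range(B):
    xb = stats.gumbel_r.rvs(loc=loc_hat, scale=scale_hat, size=len(x),
                            random_state=rng)
    boot[b] = stats.gumbel_r.fit(xb)

bias = boot.mean(axis=0) - np.array([loc_hat, scale_hat])
loc_bc, scale_bc = np.array([loc_hat, scale_hat]) - bias   # bias-corrected MLEs
print(loc_hat, scale_hat, loc_bc, scale_bc)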

9.
The Poisson–Lindley distribution is a compound discrete distribution that can be used as an alternative to other discrete distributions, such as the negative binomial. This paper develops approximate one-sided and equal-tailed two-sided tolerance intervals for the Poisson–Lindley distribution. Practical applications of the Poisson–Lindley distribution frequently involve large samples, so we utilize large-sample Wald confidence intervals in the construction of our tolerance intervals. A coverage study is presented to demonstrate the efficacy of the proposed tolerance intervals, which are also illustrated using two real data sets. The R code developed for our discussion is briefly highlighted and included in the tolerance package.
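A sketch of the large-sample Wald interval for the Poisson–Lindley parameter θ, the ingredient used in constructing the tolerance intervals, is given below. It assumes the usual one-parameter form of the pmf, P(X = x) = θ²(x + θ + 2)/(θ + 1)^(x+3), and simulated count data; the tolerance-interval construction itself is not shown.

# Large-sample Wald confidence interval for the Poisson-Lindley parameter.
# The pmf P(X=x) = theta^2 (x + theta + 2) / (theta + 1)^(x+3) is the usual
# one-parameter form; counts are simulated via the Poisson-Lindley mixture.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
theta_true, n = 1.5, 500
# Lindley(theta) is a mixture of Exp(theta) and Gamma(2, rate=theta)
is_exp = rng.random(n) < theta_true / (theta_true + 1.0)
lam = np.where(is_exp, rng.gamma(1.0, 1.0 / theta_true, n),
               rng.gamma(2.0, 1.0 / theta_true, n))
x = rng.poisson(lam)                        # Poisson-Lindley counts

def negloglik(theta):
    return -np.sum(2.0 * np.log(theta) + np.log(x + theta + 2.0)
                   - (x + 3.0) * np.log(theta + 1.0))

theta_hat = minimize_scalar(negloglik, bounds=(1e-6, 50.0),
                            method="bounded").x

# Observed information via a central finite difference of the log-likelihood
h = 1e-4
info = (negloglik(theta_hat + h) - 2 * negloglik(theta_hat)
        + negloglik(theta_hat - h)) / h**2
se = 1.0 / np.sqrt(info)
print(theta_hat, (theta_hat - 1.96 * se, theta_hat + 1.96 * se))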

10.
This paper considers the statistical analysis of a competing risks model under Type-I progressive hybrid censoring with Weibull lifetime distributions. We derive the maximum likelihood estimates and the approximate maximum likelihood estimates of the unknown parameters, and use the bootstrap method to construct confidence intervals. Based on a noninformative prior, a sampling algorithm using the acceptance–rejection method is presented to obtain the Bayes estimates, and a Monte Carlo method is employed to construct the highest posterior density credible intervals. Simulation results are provided to show the effectiveness of all the methods discussed, and one data set is analyzed.
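The acceptance–rejection step mentioned above can be sketched generically as follows. The unnormalized target density and the exponential envelope are hypothetical stand-ins, not the paper's competing-risks posterior.

# Generic acceptance-rejection sketch for sampling an unnormalized (posterior)
# density; the target below is a simple hypothetical stand-in.
import numpy as np

rng = np.random.default_rng(11)

def target_unnorm(lam):
    # hypothetical unnormalized posterior for a rate parameter lam > 0
    return lam**4 * np.exp(-2.5 * lam)

rate = 0.5
def envelope_pdf(lam):                     # Exp(rate) proposal density
    return rate * np.exp(-rate * lam)

grid = np.linspace(1e-6, 50, 20000)
M = np.max(target_unnorm(grid) / envelope_pdf(grid)) * 1.01   # numeric bound

samples = []
while len(samples) < 2000:
    lam = rng.exponential(1.0 / rate)
    if rng.random() < target_unnorm(lam) / (M * envelope_pdf(lam)):
        samples.append(lam)

print(np.mean(samples))     # Bayes estimate under squared-error loss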

11.
We consider estimation of the unknown parameters of the Chen distribution [Chen Z. A new two-parameter lifetime distribution with bathtub shape or increasing failure rate function. Statist Probab Lett. 2000;49:155–161] with bathtub shape using progressively censored samples. We obtain maximum likelihood estimates by making use of an expectation–maximization algorithm. Different Bayes estimates are derived under squared error and balanced squared error loss functions. Since the associated posterior distribution appears in an intractable form, we use an approximation method to compute these estimates. A Metropolis–Hastings algorithm is also proposed, and some further approximate Bayes estimates are obtained. Asymptotic confidence intervals are constructed using the observed Fisher information matrix, and bootstrap intervals are proposed as well. Samples generated from the MH algorithm are further used in the construction of HPD intervals. We also obtain prediction intervals and estimates for future observations in one- and two-sample situations. A numerical study is conducted to compare the performance of the proposed methods using simulations, and real data sets are analysed for illustration purposes.
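A random-walk Metropolis–Hastings sketch for the Chen distribution, F(t) = 1 − exp{λ(1 − exp(t^β))}, is given below, using a complete simulated sample and vague exponential priors. The paper's progressively censored data, prior choices and tuning are not reproduced; this is only a schematic of the MH step.

# Random-walk Metropolis-Hastings sketch for the Chen distribution
# F(t) = 1 - exp{lambda*(1 - exp(t^beta))}.  Complete simulated data and
# vague exponential priors are assumed here.
import numpy as np

rng = np.random.default_rng(5)

def chen_loglik(lam, beta, t):
    tb = t**beta
    return np.sum(np.log(lam) + np.log(beta) + (beta - 1) * np.log(t)
                  + tb + lam * (1.0 - np.exp(tb)))

# Simulate Chen data by inversion: t = (log(1 - log(1-u)/lam))**(1/beta)
lam_true, beta_true = 0.8, 0.6
u = rng.random(60)
t = (np.log(1.0 - np.log(1.0 - u) / lam_true))**(1.0 / beta_true)

def log_post(theta):
    lam, beta = np.exp(theta)               # sample on the log scale
    # vague Exp(0.01) priors on lam and beta, plus log-scale Jacobian
    return chen_loglik(lam, beta, t) - 0.01 * (lam + beta) + theta.sum()

chain = np.empty((5000, 2))
cur = np.log([1.0, 1.0])
cur_lp = log_post(cur)
for i in range(len(chain)):
    prop = cur + rng.normal(scale=0.15, size=2)
    prop_lp = log_post(prop)
    if np.log(rng.random()) < prop_lp - cur_lp:     # MH acceptance step
        cur, cur_lp = prop, prop_lp
    chain[i] = np.exp(cur)

print(chain[1000:].mean(axis=0))   # posterior means after burn-in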

12.
In this paper, we consider three distribution-free confidence intervals for quantiles given joint records from two independent sequences of continuous random variables with a common continuous distribution function. The coverage probabilities of these intervals are compared. We then compute universal bounds on the expected widths of the proposed confidence intervals. These results extend naturally to any number of independent sequences instead of just two. Finally, the proposed confidence intervals are applied to a real data set to illustrate the practical usefulness of the procedures developed here.

13.
In this paper, we consider the problem of constructing nonparametric confidence intervals for the mean of a positively skewed distribution. We suggest calibrated, smoothed bootstrap upper and lower percentile confidence intervals. On the theoretical side, we show that the proposed one-sided confidence intervals have coverage probability α + O(n^(−3/2)). This is an improvement upon the traditional bootstrap confidence intervals in terms of coverage probability. A similarly smoothed approach is also considered for constructing a two-sided confidence interval, and its theoretical properties are studied as well. A simulation study is performed to illustrate the performance of our confidence interval methods, which are then applied to a real data set.
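A minimal smoothed-bootstrap percentile sketch for the mean of a skewed sample is shown below: resample with replacement, add Gaussian kernel noise, then take percentiles. The calibration step of the proposed method is omitted, and the bandwidth rule and data are simple assumptions for illustration.

# Minimal smoothed-bootstrap percentile sketch for the mean of a positively
# skewed sample: resample with replacement and add Gaussian kernel noise.
# The calibration step described above is omitted; the bandwidth rule is a
# simple plug-in assumption.
import numpy as np

rng = np.random.default_rng(2)
x = rng.lognormal(mean=0.0, sigma=1.0, size=40)     # skewed sample
n = len(x)
h = 1.06 * x.std(ddof=1) * n**(-1 / 5)              # Silverman-type bandwidth

B = 2000
boot_means = np.empty(B)
for b in range(B):
    xb = rng.choice(x, size=n, replace=True) + h * rng.normal(size=n)
    boot_means[b] = xb.mean()

lower, upper = np.percentile(boot_means, [5, 95])   # two-sided 90% interval
print(x.mean(), (lower, upper))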

14.
A progressive hybrid censoring scheme is a mixture of Type-I and Type-II progressive censoring schemes. In this paper, we consider the analysis of progressive Type-II hybrid-censored data when the lifetimes of the individual items follow normal and extreme value distributions. Since the maximum likelihood estimators (MLEs) of the parameters cannot be obtained in closed form, we propose to use the expectation–maximization (EM) algorithm to compute the MLEs; the Newton–Raphson method is also used to estimate the model parameters. The asymptotic variance–covariance matrix of the MLEs under the EM framework is obtained from the Fisher information matrix using the missing information principle, and asymptotic confidence intervals for the parameters are then constructed. The study concludes by comparing the two methods of estimation, and the coverage probabilities of the asymptotic confidence intervals based on the missing information principle and on the observed information matrix, through a simulation study, illustrative examples and a real data analysis.

15.
In this article, a two-parameter generalized inverse Lindley distribution capable of modeling an upside-down bathtub-shaped hazard rate function is introduced. Some statistical properties of the proposed distribution are derived explicitly. The methods of maximum likelihood, least squares, and maximum product spacings are used to estimate the unknown model parameters and are compared through a simulation study. Approximate confidence intervals, based on normal and log-normal approximations, are also computed. Two algorithms are proposed for generating a random sample from the proposed distribution. A real data set is modeled to illustrate the distribution's applicability, and it is shown that it fits much better than some other existing inverse distributions.

16.
Pharmacokinetic studies are commonly performed using the two-stage approach. The first stage involves estimation of pharmacokinetic parameters, such as the area under the concentration versus time curve (AUC), for each analysis subject separately, and the second stage uses the individual parameter estimates for statistical inference. This two-stage approach is not applicable in sparse sampling situations where only one sample is available per analysis subject, as in non-clinical in vivo studies. In a serial sampling design, only one sample is taken from each analysis subject. A simulation study was carried out to assess the coverage, power and type I error of seven methods for constructing two-sided 90% confidence intervals for the ratio of two AUCs assessed in a serial sampling design, which can be used to assess bioequivalence with respect to this parameter.
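One simple way to interval-estimate a ratio of AUCs under serial sampling is sketched below: a Bailer-type AUC (trapezoidal weights applied to time-point means) with a delta-method 90% confidence interval on the log scale. This is an illustrative approach under hypothetical data and is not necessarily one of the seven methods compared in the study.

# Sketch of one simple interval estimate for a ratio of AUCs under serial
# sampling: Bailer-type AUC (trapezoidal weights on time-point means) with a
# delta-method 90% CI on the log scale; data are hypothetical.
import numpy as np

t = np.array([0.5, 1.0, 2.0, 4.0, 8.0])                  # sampling times
# trapezoidal weights: half-widths of the adjacent intervals
w = np.empty_like(t)
w[0] = (t[1] - t[0]) / 2
w[-1] = (t[-1] - t[-2]) / 2
w[1:-1] = (t[2:] - t[:-2]) / 2

def auc_and_var(conc_by_time):
    """conc_by_time: one 1-D array per time point (serial sampling)."""
    means = np.array([c.mean() for c in conc_by_time])
    vars_ = np.array([c.var(ddof=1) / len(c) for c in conc_by_time])
    return np.sum(w * means), np.sum(w**2 * vars_)   # independent animals

rng = np.random.default_rng(9)
grp1 = [rng.lognormal(np.log(m), 0.3, size=5) for m in (8, 10, 7, 4, 1.5)]
grp2 = [rng.lognormal(np.log(m), 0.3, size=5) for m in (6, 8, 6, 3.5, 1.2)]

a1, v1 = auc_and_var(grp1)
a2, v2 = auc_and_var(grp2)
se_log_ratio = np.sqrt(v1 / a1**2 + v2 / a2**2)          # delta method
z = 1.645                                                # two-sided 90%
ratio = a1 / a2
ci = (ratio * np.exp(-z * se_log_ratio), ratio * np.exp(z * se_log_ratio))
print(ratio, ci)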

17.
This paper develops a smoothed empirical likelihood (SEL)-based method to construct confidence intervals for quantile regression parameters with auxiliary information. First, we define the SEL ratio and show that it follows a chi-squared distribution. We then construct confidence intervals based on this ratio. Finally, Monte Carlo experiments are employed to evaluate the proposed method.

18.
The non-central chi-squared distribution plays a vital role in statistical testing procedures, and estimation of the non-centrality parameter provides valuable information for the power calculation of the associated test. We are interested in the statistical inference properties of the non-centrality parameter estimate based on one observation (usually a summary statistic) from a truncated chi-squared distribution. This work is motivated by the application of the flexible two-stage design in case–control studies, where the sample size needed for the second stage of a two-stage study can be determined adaptively from the results of the first stage. We first study the moment estimate for the truncated distribution and prove its existence, uniqueness, and inadmissibility and convergence properties. We then define a new class of estimates that includes the moment estimate as a special case; among this class, we recommend one member that outperforms the moment estimate in a wide range of scenarios. We also present two methods for constructing confidence intervals. Simulation studies are conducted to evaluate the performance of the proposed point and interval estimates.
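A sketch of the moment-type estimate is given below: solve E[X | X > c; λ] = x_obs numerically for the non-centrality parameter λ of a chi-squared statistic observed only because it exceeded a first-stage threshold c. The degrees of freedom, threshold and observed value are hypothetical, and the existence check is only schematic.

# Moment-type estimate of the non-centrality parameter from one observation of
# a left-truncated non-central chi-squared statistic: solve
# E[X | X > c; nc] = x_obs numerically.  df, c and x_obs are hypothetical.
import numpy as np
from scipy.stats import ncx2
from scipy.optimize import brentq

df, c, x_obs = 1, 3.84, 9.5         # e.g. a 1-df statistic passing a 5% screen

def conditional_mean(nc):
    # E[X | X > c] for a non-central chi-squared(df, nc)
    return ncx2.expect(args=(df, nc), lb=c, conditional=True)

def root_fun(nc):
    return conditional_mean(nc) - x_obs

# The moment equation may have no positive root if x_obs is small.
if root_fun(1e-8) >= 0:
    nc_hat = 0.0
else:
    nc_hat = brentq(root_fun, 1e-8, 200.0)
print(nc_hat)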

19.
In the linear regression model, the asymptotic distributions of certain functions of the confidence bounds of a class of confidence intervals for the regression parameter are investigated. The class of confidence intervals considered in this paper are based on the usual linear rank statistics (signed as well as unsigned). Under suitable assumptions, if the confidence intervals are based on the signed linear rank statistics, it is established that the lengths of the confidence intervals, properly normalized, converge in law to the standard normal distribution; if the confidence intervals are based on the unsigned linear rank statistics, it is proved that a linear function of the confidence bounds converges in law to a normal distribution.

20.
Diagnostic techniques are proposed for assessing the influence of individual cases on confidence intervals in nonlinear regression. The technique proposed uses the method of profile t-plots applied to the case-deletion model. The effect of the geometry of the statistical model on the influence measures is assessed, and an algorithm for computing case-deleted confidence intervals is described. This algorithm provides a direct method for constructing a simple diagnostic measure based on the ratio of the lengths of confidence intervals. The generalization of these methods to multiresponse models is discussed.
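The interval-length-ratio diagnostic can be sketched as below: refit the nonlinear model with each case deleted and compare confidence-interval lengths. Plain linearization (Wald) intervals from scipy.optimize.curve_fit are used instead of the profile-t intervals described above, and the model and data are hypothetical.

# Sketch of the confidence-interval length-ratio diagnostic: refit the
# nonlinear model with each case deleted and compare Wald interval lengths.
# Linearization (curve_fit) intervals stand in for profile-t intervals.
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    return a * (1.0 - np.exp(-b * x))          # simple exponential-rise model

rng = np.random.default_rng(4)
x = np.linspace(0.5, 10, 15)
y = model(x, 5.0, 0.4) + rng.normal(scale=0.25, size=x.size)

def wald_length(xd, yd, index=1, z=1.96):
    popt, pcov = curve_fit(model, xd, yd, p0=[4.0, 0.5])
    return 2 * z * np.sqrt(pcov[index, index])   # interval length for b

full_len = wald_length(x, y)
ratios = []
for i in range(x.size):
    keep = np.arange(x.size) != i
    ratios.append(wald_length(x[keep], y[keep]) / full_len)

# Ratios far from 1 flag cases that strongly influence the interval for b.
print(np.round(ratios, 2))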
