Similar Documents
A total of 20 similar documents were found.
1.
The authors consider the empirical likelihood method for the regression model of mean quality-adjusted lifetime with right censoring. They show that an empirical log-likelihood ratio for the vector of the regression parameters is asymptotically a weighted sum of independent chi-squared random variables. They adjust this empirical log-likelihood ratio so that the limiting distribution is a standard chi-square and construct corresponding confidence regions. Simulation studies lead them to conclude that empirical likelihood methods outperform the normal approximation methods in terms of coverage probability. They illustrate their methods with a data example from a breast cancer clinical trial study.
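
A minimal sketch of the empirical-likelihood machinery, profiled here for a scalar mean with the usual chi-squared calibration; the paper's censored quality-adjusted-lifetime regression requires the adjusted (weighted) version described above, which is not reproduced. All settings below are illustrative.

```python
# Empirical likelihood ratio for H0: E[X] = mu (Owen-style profiling).
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def el_log_ratio(x, mu):
    """-2 log empirical likelihood ratio for the mean."""
    z = x - mu
    if z.min() >= 0 or z.max() <= 0:          # mu outside the convex hull
        return np.inf
    # Solve sum_i z_i / (1 + lam * z_i) = 0 for the Lagrange multiplier lam,
    # restricted so that all implied weights stay positive.
    lo = (-1 + 1e-10) / z.max()
    hi = (-1 + 1e-10) / z.min()
    lam = brentq(lambda l: np.sum(z / (1 + l * z)), lo, hi)
    return 2 * np.sum(np.log1p(lam * z))

rng = np.random.default_rng(0)
x = rng.exponential(2.0, size=200)
stat = el_log_ratio(x, mu=2.0)
print("-2 log R =", stat, " reject at 5%:", stat > chi2.ppf(0.95, df=1))
```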

2.
In survival data analysis, the interval censoring problem has generally been treated via likelihood methods. Because this likelihood is complex, it is often assumed that the censoring mechanisms do not affect the mortality process. The authors specify conditions that ensure the validity of such a simplified likelihood. They prove the equivalence between different characterizations of noninformative censoring and define a constant-sum condition analogous to the one derived in the context of right censoring. They also prove that when the noninformative or constant-sum condition holds, the simplified likelihood can be used to obtain the nonparametric maximum likelihood estimator of the death time distribution function.
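
Under the simplified likelihood L = prod_i [F(R_i) - F(L_i)], the NPMLE can be computed by the self-consistency (EM) iteration sketched below. As an illustrative simplification, candidate mass points are taken at the observed right endpoints rather than Turnbull's innermost intervals.

```python
# Self-consistency (EM) iteration for the NPMLE of F under the simplified
# interval-censoring likelihood  L = prod_i [F(R_i) - F(L_i)].
import numpy as np

def npmle_interval(left, right, n_iter=500):
    grid = np.unique(right)                    # candidate mass points (a simplification)
    A = (grid[None, :] > left[:, None]) & (grid[None, :] <= right[:, None])
    p = np.full(grid.size, 1.0 / grid.size)
    for _ in range(n_iter):
        denom = A @ p                          # P(L_i < T <= R_i) under current p
        p *= (A / denom[:, None]).sum(axis=0) / left.size
    return grid, p

rng = np.random.default_rng(1)
t = rng.weibull(1.5, 300)
left = np.floor(t * 4) / 4                     # inspection times every 0.25
right = left + 0.25
grid, p = npmle_interval(left, right)
print(np.round(np.cumsum(p), 2))               # estimated F at the grid points
```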

3.
Much of the small-area estimation literature focuses on population totals and means. However, users of survey data are often interested in the finite-population distribution of a survey variable and in the measures (e.g. medians, quartiles, percentiles) that characterize the shape of this distribution at the small-area level. In this paper we propose a model-based direct estimator (MBDE, Chandra and Chambers) of the small-area distribution function. The MBDE is defined as a weighted sum of sample data from the area of interest, with weights derived from the calibrated spline-based estimate of the finite-population distribution function introduced by Harms and Duchesne, under an appropriately specified regression model with random area effects. We also discuss the mean squared error estimation of the MBDE. Monte Carlo simulations based on both simulated and real data sets show that the proposed MBDE and its associated mean squared error estimator perform well when compared with alternative estimators of the area-specific finite-population distribution function.
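
The estimator has the weighted-sum form F_hat_d(t) = sum_{i in area d} w_i 1(y_i <= t) / sum_i w_i; deriving the calibrated, model-based weights is the substance of the paper. The sketch below shows only that form, with uniform placeholder weights.

```python
# Weighted direct estimator of an area-level distribution function.
# Uniform weights stand in for the paper's calibrated model-based weights.
import numpy as np

def weighted_cdf(y, w, t_grid):
    ind = y[:, None] <= t_grid[None, :]
    return (w[:, None] * ind).sum(axis=0) / w.sum()

rng = np.random.default_rng(2)
y = rng.lognormal(0.0, 0.5, size=50)          # sample from one small area
w = np.ones_like(y)                           # placeholder weights
t_grid = np.quantile(y, [0.25, 0.5, 0.75])
print(weighted_cdf(y, w, t_grid))             # approx (0.25, 0.5, 0.75)
```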

4.
The quantile residual lifetime function provides comprehensive quantitative measures for residual life, especially when the distribution of the latter is skewed or heavy-tailed and/or when the data contain outliers. In this paper, we propose a general class of semiparametric quantile residual life models for length-biased right-censored data. We use the inverse probability weighted method to correct the bias due to length-biased sampling and informative censoring. Two estimating equations corresponding to the quantile regressions are constructed in two separate steps to obtain an efficient estimator. Consistency and asymptotic normality of the estimator are established. The main difficulty in implementing our proposed method is that the estimating equations associated with the quantiles are nondifferentiable, and we apply the majorize-minimize algorithm and estimate the asymptotic covariance using an efficient resampling method. We use simulation studies to evaluate the proposed method and illustrate its application by a real-data example.
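
A hedged sketch of the inverse-probability-of-censoring weighting (IPCW) that underlies the bias correction: uncensored subjects are weighted by 1/G_hat(T_i), with G_hat a Kaplan-Meier estimate of the censoring survivor function, and a weighted quantile is read off. The paper's length-bias correction and quantile estimating equations are not reproduced; G_hat is linearly interpolated for brevity.

```python
import numpy as np

def km_censoring_survival(time, event):
    """Kaplan-Meier for the censoring distribution (flip the event indicator).
    Returns a callable; linear interpolation replaces the true step function."""
    order = np.argsort(time)
    t, d = time[order], 1 - event[order]
    at_risk = len(t) - np.arange(len(t))
    surv = np.cumprod(1.0 - d / at_risk)
    return lambda x: np.interp(x, t, surv, left=1.0)

rng = np.random.default_rng(3)
T = rng.exponential(2.0, 500)                  # event times
C = rng.exponential(4.0, 500)                  # censoring times
time, event = np.minimum(T, C), (T <= C).astype(float)

G = km_censoring_survival(time, event)
w = event / np.maximum(G(time), 1e-8)          # IPCW weights (0 if censored)
order = np.argsort(time)
cw = np.cumsum(w[order]) / w.sum()
median = time[order][np.searchsorted(cw, 0.5)] # weighted median of T
print("IPCW median:", round(median, 3), " truth:", round(2 * np.log(2), 3))
```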

5.
The analysis of time-to-event data typically makes the censoring-at-random assumption, i.e., that, conditional on covariates in the model, the distribution of event times is the same whether they are observed or unobserved (i.e., right censored). When patients who remain in follow-up stay on their assigned treatment, analysis under this assumption broadly addresses the de jure, or "while on treatment strategy," estimand. In such cases, we may well wish to explore the robustness of our inference to more pragmatic, de facto or "treatment policy strategy," assumptions about the behaviour of patients post-censoring. This is particularly the case when censoring occurs because patients change, or revert, to the usual (i.e., reference) standard of care. Recent work has shown how such questions can be addressed for trials with continuous outcome data and longitudinal follow-up, using reference-based multiple imputation. For example, patients in the active arm may have their missing data imputed assuming they reverted to the control (i.e., reference) intervention on withdrawal. Reference-based imputation has two advantages: (a) it avoids the user specifying numerous parameters describing the distribution of patients' postwithdrawal data and (b) it is, to a good approximation, information anchored, so that the proportion of information lost due to missing data under the primary analysis is held constant across the sensitivity analyses. In this article, we build on recent work in the survival context, proposing a class of reference-based assumptions appropriate for time-to-event data. We report a simulation study exploring the extent to which the multiple imputation estimator (using Rubin's variance formula) is information anchored in this setting and then illustrate the approach by reanalysing data from a randomized trial, which compared medical therapy with angioplasty for patients presenting with angina.
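
A toy "jump-to-reference" imputation under exponential hazards, combined by Rubin's rules; the rates, trial layout, and the plug-in treatment of the reference hazard are all illustrative simplifications of the class of reference-based assumptions proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
n, M = 400, 20
T = np.r_[rng.exponential(1/0.30, n), rng.exponential(1/0.18, n)]   # ctrl, active
C = rng.exponential(1/0.15, 2 * n)                # withdrawal / censoring times
time, event = np.minimum(T, C), T <= C
arm = np.r_[np.zeros(n, bool), np.ones(n, bool)]  # False = control, True = active

est, var = [], []
for _ in range(M):
    # control-arm hazard; a fully proper MI would draw this from its posterior
    rate_ref = event[~arm].sum() / time[~arm].sum()
    todo = ~event & arm                           # censored active-arm patients
    t_imp, ev = time.copy(), event.copy()
    t_imp[todo] = time[todo] + rng.exponential(1 / rate_ref, todo.sum())
    ev[todo] = True                               # "jump to reference" after withdrawal
    rate = [ev[g].sum() / t_imp[g].sum() for g in (~arm, arm)]
    est.append(np.log(rate[1] / rate[0]))         # completed-data log hazard ratio
    var.append(1 / ev[~arm].sum() + 1 / ev[arm].sum())

total = np.mean(var) + (1 + 1 / M) * np.var(est, ddof=1)   # Rubin's rules
print(f"log HR = {np.mean(est):.3f}  (MI SE = {np.sqrt(total):.3f})")
```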

6.
Single cohort stage-frequency data are considered when assessing the stage reached by individuals through destructive sampling. For this type of data, when all hazard rates are assumed constant and equal, Laplace transform methods have been applied in the past to estimate the parameters in each stage-duration distribution and the overall hazard rates. If hazard rates are not all equal, estimating stage-duration parameters using Laplace transform methods becomes complex. In this paper, two new models are proposed to estimate stage-dependent maturation parameters using Laplace transform methods where non-trivial hazard rates apply. The first model encompasses hazard rates that are constant within each stage but vary between stages. The second model encompasses time-dependent hazard rates within stages. Moreover, this paper introduces a method for estimating the hazard rate in each stage for the stage-wise constant hazard rates model. This work presents methods that could be used in specific types of laboratory studies, but the main motivation is to explore the relationships between stage maturation parameters that, in future work, could be exploited in applying Bayesian approaches. The application of the methodology in each model is evaluated using simulated data in order to illustrate the structure of these models.

7.
The authors propose graphical and numerical methods for checking the adequacy of the logistic regression model for matched case-control data. Their approach is based on the cumulative sum of residuals over the covariate or linear predictor. Under the assumed model, the cumulative residual process converges weakly to a centered Gaussian limit whose distribution can be approximated via computer simulation. The observed cumulative residual pattern can then be compared both visually and analytically to a certain number of simulated realizations of the approximate limiting process under the null hypothesis. The proposed techniques allow one to check the functional form of each covariate, the logistic link function as well as the overall model adequacy. The authors assess the performance of the proposed methods through simulation studies and illustrate them using data from a cardiovascular study.
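
The sketch below illustrates the cumulative-residual idea for ordinary (unmatched) logistic regression, calibrating the supremum statistic by a parametric bootstrap instead of the paper's Gaussian-multiplier approximation to the limiting process; the data-generating model and all settings are illustrative.

```python
import numpy as np

def fit_logistic(X, y, n_iter=25):
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):                    # Newton-Raphson
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        beta += np.linalg.solve((X.T * (p * (1 - p))) @ X, X.T @ (y - p))
    return beta

def sup_cumres(x, resid):
    """Supremum of the cumulative residual process over the covariate."""
    order = np.argsort(x)
    return np.abs(np.cumsum(resid[order])).max() / np.sqrt(len(x))

rng = np.random.default_rng(12)
n = 400
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = (rng.random(n) < 1/(1 + np.exp(-(0.3 + x + 0.8 * x**2)))).astype(float)

beta = fit_logistic(X, y)                      # working model: linear in x
p_hat = 1/(1 + np.exp(-X @ beta))
obs = sup_cumres(x, y - p_hat)
null = []
for _ in range(300):                           # parametric bootstrap calibration
    yb = (rng.random(n) < p_hat).astype(float)
    bb = fit_logistic(X, yb)
    null.append(sup_cumres(x, yb - 1/(1 + np.exp(-X @ bb))))
print("p-value:", np.mean(np.array(null) >= obs))   # small: quadratic term missed
```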

8.
Nonlinear mixed-effects models are widely used for the analysis of longitudinal data, especially in pharmaceutical research. Random effects are latent, unobservable variables, so the random-effects distribution is subject to misspecification in practice. In this paper, we first study the consequences of misspecifying the random-effects distribution in nonlinear mixed-effects models. Our study focuses on Gauss-Hermite quadrature, which is now the routine method for calculating the marginal likelihood in mixed models. We then present a formal diagnostic test to check the appropriateness of the assumed random-effects distribution in nonlinear mixed-effects models, which is very useful for real data analysis. Our findings show that the estimates of fixed-effects parameters are generally robust to deviations from normality of the random-effects distribution, but the estimates of variance components are very sensitive to this distributional assumption. Furthermore, a misspecified random-effects distribution will either overestimate or underestimate the predictions of random effects. We illustrate the results using a real data application from an intensive pharmacokinetic study.
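
The marginal-likelihood computation referred to above, sketched for a random-intercept logistic model: with the substitution b = sqrt(2)*sigma*x the Gaussian integral matches the Gauss-Hermite weight e^(-x^2), so L_i is approximately (1/sqrt(pi)) * sum_k w_k f_i(sqrt(2)*sigma*x_k). Model and settings are illustrative.

```python
import numpy as np

def marginal_loglik(y, beta0, sigma, n_quad=15):
    """y: (n_subjects, n_obs) binary array; random intercept b_i ~ N(0, sigma^2)."""
    x, w = np.polynomial.hermite.hermgauss(n_quad)
    b = np.sqrt(2.0) * sigma * x                   # quadrature nodes on the b-scale
    p = 1.0 / (1.0 + np.exp(-(beta0 + b[None, :])))
    lik = np.ones((y.shape[0], n_quad))            # P(y_i | b_k), built up over j
    for j in range(y.shape[1]):
        yj = y[:, [j]]
        lik *= p**yj * (1 - p)**(1 - yj)
    return np.sum(np.log(lik @ w / np.sqrt(np.pi)))

rng = np.random.default_rng(5)
b = rng.normal(0, 1.0, size=300)
y = (rng.random((300, 6)) < 1/(1 + np.exp(-(0.5 + b[:, None])))).astype(float)
grid = [marginal_loglik(y, 0.5, s) for s in (0.5, 1.0, 2.0)]
print(np.round(grid, 1))                           # the true sigma = 1.0 scores best
```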

9.
In an affected-sib-pair genetic linkage analysis, identical-by-descent (IBD) data for affected sib pairs are routinely collected at a large number of markers along chromosomes. Under very general genetic assumptions, the IBD distribution at each marker satisfies the possible triangle constraint. Statistical analysis of IBD data should thus utilize this information to improve efficiency. At the same time, this constraint renders the usual regularity conditions for likelihood-based statistical methods unsatisfied. In this paper, the authors study the asymptotic properties of the likelihood ratio test (LRT) under the possible triangle constraint. They derive the limiting distribution of the LRT statistic based on data from a single locus. They investigate the precision of the asymptotic distribution and the power of the test by simulation. They also study the test based on the supremum of the LRT statistics over the markers distributed throughout a chromosome. Instead of deriving a limiting distribution for this test, they use a mixture of chi-squared distributions to approximate its true distribution. Their simulation results show that this approach has desirable simplicity and satisfactory precision.
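
A small sketch of the mixture-of-chi-squared (chi-bar-squared) p-value computation, p = sum_j w_j P(chi2_{df_j} >= t), shown with the textbook 50:50 boundary mixture; the weights appropriate to the possible triangle constraint are derived in the paper and will differ.

```python
import numpy as np
from scipy.stats import chi2

def chibar_pvalue(t, weights, dfs):
    # chi2 with 0 df is a point mass at zero, hence the special case
    return sum(w * (chi2.sf(t, df) if df > 0 else float(t <= 0))
               for w, df in zip(weights, dfs))

print(chibar_pvalue(2.71, [0.5, 0.5], [0, 1]))   # ~0.05 at the classic boundary cutoff
```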

10.
A consistent approach to the problem of testing non-correlation between two univariate infinite-order autoregressive models was proposed by Hong (1996). His test is based on a weighted sum of squares of residual cross-correlations, with weights depending on a kernel function. In this paper, the author follows Hong's approach to test non-correlation of two cointegrated (or partially non-stationary) ARMA time series. The test of Pham, Roy & Cédras (2003) may be seen as a special case of his approach, as it corresponds to the choice of a truncated uniform kernel. The proposed procedure remains valid for testing non-correlation between two stationary invertible multivariate ARMA time series. The author derives the asymptotic distribution of his test statistics under the null hypothesis and proves that his procedures are consistent. He also studies the level and power of his proposed tests in finite samples through simulation. Finally, he presents an illustration based on real data.
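
The truncated-uniform-kernel special case mentioned above reduces to a Haugh-type portmanteau statistic, Q = n * sum over |j| <= M of r12(j)^2, asymptotically chi-squared with 2M+1 degrees of freedom under non-correlation of the residual series. The sketch feeds in white noise directly; in practice the residuals come from the fitted ARMA/cointegrated models.

```python
import numpy as np
from scipy.stats import chi2

def cross_corr_test(e1, e2, M):
    n = len(e1)
    e1 = (e1 - e1.mean()) / e1.std()
    e2 = (e2 - e2.mean()) / e2.std()
    # r12(j) for j = -M..M, summing e1[t] * e2[t+j] over the overlap
    r = [np.dot(e1[max(0, -j):n - max(0, j)],
                e2[max(0, j):n - max(0, -j)]) / n for j in range(-M, M + 1)]
    Q = n * np.sum(np.square(r))
    return Q, chi2.sf(Q, df=2 * M + 1)

rng = np.random.default_rng(6)
e1, e2 = rng.normal(size=(2, 1000))            # independent white noise
print(cross_corr_test(e1, e2, M=5))            # large p-value expected
```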

11.
On the basis of the idea of the Nadaraya-Watson (NW) kernel smoother and the technique of the local linear (LL) smoother, we construct the NW and LL estimators of conditional mean functions and their derivatives for a left-truncated and right-censored model. The target function includes the regression function, the conditional moment and the conditional distribution function as special cases. It is assumed that the lifetime observations with covariates form a stationary α-mixing sequence. Asymptotic normality of the estimators is established. Finite sample behaviour of the estimators is investigated via simulations. A real data illustration is included too.
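
The two smoothers have their familiar forms, sketched below for complete data with a Gaussian kernel; the paper applies them to observations transformed to account for left truncation, right censoring and the α-mixing dependence.

```python
import numpy as np

def nw(x0, x, y, h):
    """Nadaraya-Watson estimate of E[Y | X = x0] (x0 may be a vector of points)."""
    k = np.exp(-0.5 * ((x[:, None] - x0[None, :]) / h) ** 2)
    return (k * y[:, None]).sum(0) / k.sum(0)

def local_linear(x0, x, y, h):
    """Local-linear fit; the local slope beta[1] also estimates m'(t)."""
    out = np.empty_like(x0)
    for i, t in enumerate(x0):
        k = np.exp(-0.5 * ((x - t) / h) ** 2)
        Xd = np.column_stack([np.ones_like(x), x - t])
        beta = np.linalg.solve(Xd.T @ (k[:, None] * Xd), Xd.T @ (k * y))
        out[i] = beta[0]                       # intercept = fitted m(t)
    return out

rng = np.random.default_rng(7)
x = rng.uniform(0, np.pi, 400)
y = np.sin(x) + rng.normal(0, 0.2, 400)
x0 = np.linspace(0.2, np.pi - 0.2, 5)
print(np.round(nw(x0, x, y, 0.15), 2), np.round(local_linear(x0, x, y, 0.15), 2))
```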

12.
In the analysis of semi-competing risks data, interest lies in estimation and inference with respect to a so-called non-terminal event, the observation of which is subject to a terminal event. Multi-state models are commonly used to analyse such data, with covariate effects on the transition/intensity functions typically specified via the Cox model and dependence between the non-terminal and terminal events specified, in part, by a unit-specific shared frailty term. To ensure identifiability, the frailties are typically assumed to arise from a parametric distribution, specifically a Gamma distribution with mean 1.0 and variance, say, σ². When the frailty distribution is misspecified, however, the resulting estimator is not guaranteed to be consistent, with the extent of asymptotic bias depending on the discrepancy between the assumed and true frailty distributions. In this paper, we propose a novel class of transformation models for semi-competing risks analysis that permit the non-parametric specification of the frailty distribution. To ensure identifiability, the class restricts to parametric specifications of the transformation and the error distribution; the latter are flexible, however, and cover a broad range of possible specifications. We also derive the semi-parametric efficient score under the complete data setting and propose a non-parametric score imputation method to handle right censoring; consistency and asymptotic normality of the resulting estimators are derived and small-sample operating characteristics are evaluated via simulation. Although the proposed semi-parametric transformation model and non-parametric score imputation method are motivated by the analysis of semi-competing risks data, they are broadly applicable to any analysis of multivariate time-to-event outcomes in which a unit-specific shared frailty is used to account for correlation. Finally, the proposed model and estimation procedures are applied to a study of hospital readmission among patients diagnosed with pancreatic cancer.

13.
The binomial exponential 2 (BE2) distribution was proposed by Bakouch et al. as the distribution of a random sum of independent exponential random variables, where the number of summands has a zero-truncated binomial distribution. In this article, we introduce a generalization of the BE2 distribution which offers a more flexible model for lifetime data. The hazard rate function of the proposed distribution can be decreasing, increasing, decreasing-increasing-decreasing, or unimodal, so it is quite flexible for analyzing non-negative real-life data. Some statistical properties of the distribution are investigated, along with parameter estimation. Three different algorithms are proposed for generating random data from the new distribution. Two real data applications, concerning strength data and Proschan's air-conditioner data, show that the new distribution outperforms the BE2 distribution and some other well-known distributions in modeling lifetime data.
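
Generating from the BE2 construction described above is direct: draw N from a zero-truncated binomial and return the sum of N independent exponentials, which is a Gamma(N, 1/λ) draw. The paper's generalization and its other sampling algorithms are not reproduced here.

```python
import numpy as np

def rbe2(size, n, p, lam, rng):
    # zero-truncated binomial by rejection (fine unless p is tiny)
    N = rng.binomial(n, p, size)
    while (zero := (N == 0)).any():
        N[zero] = rng.binomial(n, p, zero.sum())
    return rng.gamma(shape=N, scale=1.0 / lam)   # sum of N iid Exp(lam)

rng = np.random.default_rng(8)
t = rbe2(100_000, n=5, p=0.4, lam=1.0, rng=rng)
# check against the theoretical mean E[N]/lam, with E[N] = n*p / (1 - (1-p)^n)
print(round(t.mean(), 3), round(5 * 0.4 / (1 - 0.6**5), 3))
```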

14.
We propose a non-parametric change-point test for long-range dependent data, which is based on the Wilcoxon two-sample test. We derive the asymptotic distribution of the test statistic under the null hypothesis that no change occurred. In a simulation study, we compare the power of our test with the power of a test based on differences of means. The results show that in the case of Gaussian data, our test has only slightly smaller power than the 'difference-of-means' test. For heavy-tailed data, our test outperforms the 'difference-of-means' test.
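
The statistic has a convenient rank form: W_k = sum_{i<=k} sum_{j>k} (1{X_i <= X_j} - 1/2) equals, up to sign, sum_{i<=k} R_i - k(n+1)/2. Note that under long-range dependence the normalization and limiting law are nonstandard (that is the subject of the paper); the simulated critical value below is valid for iid data only.

```python
import numpy as np

def wilcoxon_cp(x):
    """max_k |W_k| via the rank-sum identity (ties ignored for brevity)."""
    n = len(x)
    ranks = np.argsort(np.argsort(x)) + 1.0
    w = np.cumsum(ranks)[:-1] - np.arange(1, n) * (n + 1) / 2
    return np.abs(w).max()

rng = np.random.default_rng(9)
x = np.r_[rng.normal(0, 1, 250), rng.normal(0.8, 1, 250)]   # shift at k = 250
null = [wilcoxon_cp(rng.normal(size=500)) for _ in range(500)]  # iid null only
print(wilcoxon_cp(x), ">", np.quantile(null, 0.95))
```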

15.
In statistical modelling, it is often of interest to evaluate non-negative quantities that capture heterogeneity in the population, such as variances, mixing proportions and dispersion parameters. In instances of covariate-dependent heterogeneity, the implied homogeneity hypotheses are nonstandard and existing inferential techniques are not applicable. In this paper, we develop a quasi-score test statistic to evaluate homogeneity against heterogeneity that varies with a covariate profile through a regression model. We establish the limiting null distribution of the proposed test as a functional of mixtures of chi-square processes. The methodology does not require the full distribution of the data to be entirely specified. Instead, a general estimating function is assumed for the finite-dimensional component of the model that is of interest, while other characteristics of the population are left completely unspecified. We apply the methodology to evaluate the excess zero proportion in zero-inflated models for count data. Our numerical simulations show that the proposed test can greatly improve efficiency over tests of homogeneity that neglect covariate information under the alternative hypothesis. An empirical application to dental caries indices demonstrates the importance and practical utility of the methodology in detecting excess zeros in the data.
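
For the zero-inflation application without covariates, a classical score test for excess zeros in a Poisson model is easy to sketch; the paper's quasi-score test generalizes this to covariate-dependent excess-zero proportions, with a process-type limit. The chi-squared calibration below sets boundary refinements aside.

```python
import numpy as np
from scipy.stats import chi2

def zip_score_test(y):
    """Score test of Poisson vs. zero-inflated Poisson (no covariates)."""
    lam = y.mean()
    p0 = np.exp(-lam)                       # model-implied zero probability
    U = np.sum(y == 0) / p0 - len(y)        # score for the inflation parameter
    V = len(y) * ((1 - p0) / p0 - lam)      # its information, adjusted for lam-hat
    S = U**2 / V
    return S, chi2.sf(S, df=1)

rng = np.random.default_rng(13)
y = np.where(rng.random(500) < 0.15, 0, rng.poisson(2.0, 500))   # 15% excess zeros
print(zip_score_test(y))                    # large statistic, tiny p-value
```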

16.
We introduce and study the so-called Kumaraswamy generalized gamma distribution, which is capable of modeling bathtub-shaped hazard rate functions. The beauty and importance of this distribution lie in its ability to model monotone and non-monotone failure rate functions, which are quite common in lifetime data analysis and reliability. The new distribution has a large number of well-known lifetime special sub-models, such as the exponentiated generalized gamma, exponentiated Weibull, exponentiated generalized half-normal, exponentiated gamma and generalized Rayleigh, among others. Some structural properties of the new distribution are studied. We obtain two infinite sum representations for the moments and an expansion for the generating function. We calculate the density function of the order statistics and an expansion for their moments. The method of maximum likelihood and a Bayesian procedure are adopted for estimating the model parameters. The usefulness of the new distribution is illustrated in two real data sets.
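
The Kumaraswamy-G construction underlying the distribution takes f(x) = a*b*g(x)*G(x)^(a-1)*[1 - G(x)^a]^(b-1), with g and G the baseline generalized gamma pdf and cdf. The sketch uses scipy's gengamma parameterization, which may differ from the paper's.

```python
import numpy as np
from scipy.stats import gengamma

def kw_gg_pdf(x, a, b, gg_a, gg_c):
    G = gengamma.cdf(x, gg_a, gg_c)
    g = gengamma.pdf(x, gg_a, gg_c)
    return a * b * g * G**(a - 1) * (1.0 - G**a)**(b - 1)

x = np.linspace(1e-3, 8, 2000)
f = kw_gg_pdf(x, a=2.0, b=2.0, gg_a=1.5, gg_c=1.0)
print(round(np.sum(f) * (x[1] - x[0]), 3))     # ~1.0: the density integrates to one
```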

17.
The process comparing the empirical cumulative distribution function of the sample with a parametric estimate of the cumulative distribution function is known as the empirical process with estimated parameters, and has been extensively employed in the literature for goodness-of-fit testing. The simplest way to carry out such goodness-of-fit tests, especially in a multivariate setting, is to use a parametric bootstrap. Although very easy to implement, the parametric bootstrap can become very computationally expensive as the sample size, the number of parameters, or the dimension of the data increase. An alternative resampling technique based on a fast weighted bootstrap is proposed in this paper, and is studied both theoretically and empirically. The outcome of this work is a generic and computationally efficient multiplier goodness-of-fit procedure that can be used as a large-sample alternative to the parametric bootstrap. In order to approximately determine how large the sample size needs to be for the parametric and weighted bootstraps to have roughly equivalent power, extensive Monte Carlo experiments are carried out in dimensions one, two and three, and for models containing up to nine parameters. The computational gains resulting from the use of the proposed multiplier goodness-of-fit procedure are illustrated on trivariate financial data. A by-product of this work is a fast large-sample goodness-of-fit procedure for the bivariate and trivariate t distribution whose degrees of freedom are fixed.
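
For concreteness, here is the parametric-bootstrap baseline that the multiplier procedure is designed to replace, in a univariate normal toy case with a Cramér-von Mises statistic; the cost of the refitting loop grows with sample size, parameter count and dimension, which is exactly the motivation above.

```python
import numpy as np
from scipy.stats import norm

def cvm(x, mu, sd):
    """Cramer-von Mises distance to the fitted normal cdf."""
    n = len(x)
    u = norm.cdf(np.sort(x), mu, sd)
    return 1/(12*n) + np.sum((u - (2*np.arange(1, n + 1) - 1)/(2*n))**2)

rng = np.random.default_rng(10)
x = rng.standard_t(df=4, size=200)             # mild misspecification
stat = cvm(x, x.mean(), x.std(ddof=1))
boot = []
for _ in range(999):                           # the expensive refitting loop
    xb = rng.normal(x.mean(), x.std(ddof=1), size=len(x))
    boot.append(cvm(xb, xb.mean(), xb.std(ddof=1)))
print("p-value:", (1 + np.sum(np.array(boot) >= stat)) / 1000)
```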

18.
In this paper, we present an innovative method for constructing proper priors for the skewness (shape) parameter in the skew-symmetric family of distributions. The proposed method is based on assigning a prior distribution on the perturbation effect of the shape parameter, which is quantified in terms of the total variation distance. We discuss strategies to translate prior beliefs about the asymmetry of the data into an informative prior distribution of this class. We show via a Monte Carlo simulation study that our non-informative priors induce posterior distributions with good frequentist properties, similar to those of the Jeffreys prior. Our informative priors yield better results than their competitors from the literature. We also propose a scale-invariant and location-invariant prior structure for models with unknown location and scale parameters and provide sufficient conditions for the propriety of the corresponding posterior distribution. Illustrative examples are presented using simulated and real data.
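
The perturbation effect can be made concrete in the skew-normal subfamily f(x; λ) = 2φ(x)Φ(λx), where TV(f_λ, f_0) = ∫ φ(x)|Φ(λx) - 1/2| dx maps |λ| monotonically onto [0, 1/2); a prior placed on this distance then induces a prior on λ. This illustrates the distance only, not the paper's full prior construction.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def tv(lam):
    """Total variation distance between skew-normal(lam) and the standard normal."""
    f = lambda x: norm.pdf(x) * abs(norm.cdf(lam * x) - 0.5)
    return 2 * quad(f, 0, np.inf)[0]           # integrand is even in x

for lam in (0.0, 0.5, 1.0, 2.0, 10.0):
    print(lam, round(tv(lam), 4))              # increases from 0 towards 1/2
```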

19.
A four-parameter extension of the generalized gamma distribution capable of modelling a bathtub-shaped hazard rate function is defined and studied. The beauty and importance of this distribution lie in its ability to model monotone and non-monotone failure rate functions, which are quite common in lifetime data analysis and reliability. The new distribution has a number of well-known lifetime special sub-models, such as the exponentiated Weibull, exponentiated generalized half-normal, exponentiated gamma and generalized Rayleigh, among others. We derive two infinite sum representations for its moments. We calculate the density of the order statistics and two expansions for their moments. The method of maximum likelihood is used for estimating the model parameters and the observed information matrix is obtained. Finally, a real data set from the medical area is analysed.

20.
The Lasso has sparked interest in the use of penalization of the log-likelihood for variable selection, as well as for shrinkage. We are particularly interested in the more-variables-than-observations case, which is of central importance for modern data. We adopt the Bayesian interpretation of the Lasso as the maximum a posteriori estimate of the regression coefficients under independent double-exponential prior distributions. Generalizing this prior provides a family of hyper-Lasso penalty functions, which includes the quasi-Cauchy distribution of Johnstone and Silverman as a special case. The properties of this approach, including the oracle property, are explored, and an EM algorithm for inference in regression problems is described. The posterior is multi-modal, and we suggest a strategy of using a set of perfectly fitting random starting values to explore modes in different regions of the parameter space. Simulations show that our procedure provides significant improvements over a range of established procedures, and we provide an example from chemometrics.
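
A hedged sketch of the multi-start strategy: each run starts from a perfectly fitting solution (the least-squares interpolator plus a random null-space component, so Xb = y exactly) and descends a heavy-tailed penalized objective. A Cauchy-type log-penalty stands in for the hyper-Lasso family here; the paper's penalty and EM updates are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(11)
n, p = 50, 100                                   # more variables than observations
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:3] = (3.0, -2.0, 1.5)
y = X @ beta + rng.normal(0, 0.5, n)

def neg_log_posterior(b, tau=0.1):
    r = y - X @ b
    return 0.5 * r @ r + np.sum(np.log1p((b / tau)**2))   # Cauchy-type log-penalty

ls = np.linalg.lstsq(X, y, rcond=None)[0]        # minimum-norm interpolator: X ls = y
best = None
for _ in range(10):                              # perfectly fitting random starts
    z = rng.normal(0, 1.0, p)
    z -= X.T @ np.linalg.solve(X @ X.T, X @ z)   # project z onto the null space of X
    res = minimize(neg_log_posterior, ls + z, method="L-BFGS-B")
    if best is None or res.fun < best.fun:
        best = res
print("selected coefficients:", np.where(np.abs(best.x) > 0.5)[0])
```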
