Similar Literature
20 similar documents retrieved.
1.
Models for dealing with survival data in the presence of a cured fraction of individuals have attracted the attention of many researchers and practitioners in recent years. In this paper, we propose a cure rate model under the competing risks scenario. For the number of causes that can lead to the event of interest, we assume the polylogarithm distribution. The model is flexible in the sense that it encompasses several well-known models, which can be tested using large-sample test statistics applied to nested models. Maximum-likelihood estimation based on the EM algorithm and hypothesis testing are investigated. Results of simulation studies designed to gauge the performance of the estimation method and of two test statistics are reported. The methodology is applied to the analysis of a data set.
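As a brief illustration of the competing-risks cure rate construction (this structure is standard in the literature; the notation is ours, and the paper's exact parameterization may differ): if M denotes the latent number of causes, with probability generating function G_M, and each cause has latency survival function S(t), then the observed time is the minimum of the latencies, so

\[
S_{\mathrm{pop}}(t) = \mathrm{E}\big[S(t)^{M}\big] = G_M\big(S(t)\big),
\qquad
\lim_{t\to\infty} S_{\mathrm{pop}}(t) = G_M(0) = \Pr(M = 0),
\]

with the cured proportion given by the probability of zero causes. Plugging the polylogarithm pmf into G_M yields the proposed model, and the nested special cases correspond to particular values of its parameters.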

2.
In this paper we introduce a binary search algorithm that efficiently finds initial maximum-likelihood estimates for sequential experiments in which a binary response is modeled by a continuous factor. The problem is motivated by switching measurements on superconducting Josephson junctions. In this quantum-mechanical experiment, the current is the factor controlled by the experimenter, and a binary response indicating the presence or absence of a voltage response is measured. Prior knowledge of the model parameters is typically poor, which may cause common approaches to initial estimation to fail. The binary search algorithm is designed to work reliably even when the prior information is very poor. The properties of the algorithm are studied in simulations, and an advantage over initial estimation with equally spaced factor levels is demonstrated. We also study the cost-efficiency of the binary search algorithm and find the approximately optimal number of measurements per stage when there is a cost associated with the number of stages in the experiment.
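A minimal sketch of the idea in Python (our illustration, not the authors' algorithm: the logistic response model, the respond() helper, and all parameter values are hypothetical stand-ins, and real switching measurements are noisy, so a single bisection pass can misstep):

import numpy as np

rng = np.random.default_rng(0)

def respond(current, mu=5.0, scale=0.3):
    # Hypothetical switching measurement: the probability of observing
    # a voltage response rises with the applied current (logistic model).
    p = 1.0 / (1.0 + np.exp(-(current - mu) / scale))
    return rng.random() < p

def binary_search_initial(lo, hi, n_steps=12):
    # Bisect the factor range: a response suggests the switching
    # current lies below the tested level, no response suggests above.
    for _ in range(n_steps):
        mid = 0.5 * (lo + hi)
        if respond(mid):
            hi = mid
        else:
            lo = mid
    mu0 = 0.5 * (lo + hi)   # crude location estimate
    s0 = hi - lo            # crude scale from the final bracket
    return mu0, s0

print(binary_search_initial(0.0, 10.0))

The final bracket yields rough location and scale values that can seed maximum-likelihood estimation in the subsequent stages of the sequential design.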

3.
The long computational time required to construct optimal designs for computer experiments has limited their use in practice. In this paper, a new algorithm for constructing optimal experimental designs is developed. Two major developments are involved in this work. One is an efficient global optimization algorithm, named the enhanced stochastic evolutionary (ESE) algorithm. The other is a set of efficient methods for evaluating optimality criteria. The proposed algorithm is compared with existing techniques and found to be much more efficient in terms of computation time, the number of exchanges needed to generate new designs, and the optimality criteria achieved. The algorithm is also flexible enough to construct various classes of optimal designs that retain certain desired structural properties.
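A hedged sketch of the flavor of such a search (a simplified element exchange with a fixed acceptance threshold; the actual ESE algorithm adaptively controls its threshold), here applied to a maximin-distance Latin hypercube design:

import numpy as np

rng = np.random.default_rng(1)

def maximin_crit(D):
    # Negative minimum pairwise distance (smaller is better here).
    d = np.sqrt(((D[:, None, :] - D[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)
    return -d.min()

def ese_like_search(n=12, k=3, n_iter=2000, T=0.05):
    # Start from a random Latin hypercube design scaled to [0, 1].
    D = np.column_stack([rng.permutation(n) for _ in range(k)]) / (n - 1)
    best, f = D.copy(), maximin_crit(D)
    f_best = f
    for _ in range(n_iter):
        # Element exchange: swap two levels within a random column,
        # which preserves the Latin hypercube structure.
        cand = D.copy()
        col = rng.integers(k)
        i, j = rng.choice(n, size=2, replace=False)
        cand[[i, j], col] = cand[[j, i], col]
        f_cand = maximin_crit(cand)
        # Threshold acceptance: keep mildly worse designs to escape
        # local optima (a stand-in for ESE's adaptive threshold control).
        if f_cand < f + T:
            D, f = cand, f_cand
            if f < f_best:
                best, f_best = D.copy(), f
    return best, -f_best

design, min_dist = ese_like_search()
print(min_dist)

The paper's second contribution, efficient criterion evaluation, would replace the from-scratch recomputation of the criterion after each swap.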

4.
This article develops a framework for estimating multivariate treatment effect models in the presence of sample selection. The methodology deals with several important issues prevalent in policy and program evaluation, including application and approval stages, nonrandom treatment assignment, endogeneity, and discrete outcomes. The article presents a computationally efficient estimation algorithm, together with techniques for model comparison and for computing treatment effects. The framework is applied to evaluate the effectiveness of bank recapitalization programs and their ability to resuscitate the financial system. The analysis of lender-of-last-resort (LOLR) policies is complicated not only by econometric challenges, but also because regulator data are not easily obtainable. Motivated by these difficulties, the article constructs a novel bank-level dataset and employs the new methodology to jointly model a bank's decision to apply for assistance, the LOLR's decision to approve or decline the assistance, and the bank's performance following the disbursements. The article offers practical estimation tools that unveil new answers to important regulatory and policy questions.

5.
In this paper, we propose a flexible cure rate survival model by assuming that the number of competing causes of the event of interest follows the Negative Binomial distribution and that the time to event follows a Weibull distribution. In doing so, we introduce the Weibull-Negative-Binomial (WNB) distribution, which can be used to model survival data when the hazard rate function is increasing, decreasing, or of certain non-monotone shapes. Another advantage of the proposed model is that it contains, as particular cases, several distributions commonly used in lifetime analysis. Moreover, the proposed model includes as special cases some of the well-known cure rate models discussed in the literature. We consider a frequentist analysis for parameter estimation of the WNB model with cure rate. We then derive the appropriate matrices for assessing local influence on the parameter estimates under different perturbation schemes, and we present some ways to perform global influence analysis. Finally, the methodology is illustrated on a medical data set.
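A small numerical sketch under one widely used parameterization of the negative binomial cure rate family (whether it matches this paper's exactly is our assumption): the population survival is S_pop(t) = {1 + αθF(t)}^(−1/α) with F the Weibull cdf, and α → 0 recovers the promotion-time (Poisson) cure model:

import numpy as np

def weibull_cdf(t, shape, scale):
    return 1.0 - np.exp(-(t / scale) ** shape)

def wnb_population_survival(t, theta, alpha, shape, scale):
    # Negative binomial cure rate survival with Weibull latency; the
    # limit t -> inf gives the cure fraction (1 + alpha*theta)^(-1/alpha).
    F = weibull_cdf(t, shape, scale)
    return (1.0 + alpha * theta * F) ** (-1.0 / alpha)

t = np.linspace(0.0, 10.0, 6)
print(wnb_population_survival(t, theta=1.5, alpha=0.5, shape=1.3, scale=2.0))
print("cure fraction:", (1.0 + 0.5 * 1.5) ** (-1.0 / 0.5))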

6.
Missing covariate data are a common issue in generalized linear models (GLMs). A model-based procedure arising from properly specifying joint models for both the partially observed covariates and the corresponding missing-indicator variables represents a sound and flexible methodology that lends itself to maximum likelihood estimation, as the likelihood function is available in computable form. In this paper, a novel model-based methodology is proposed for the regression analysis of GLMs when the partially observed covariates are categorical. Pair-copula constructions are used as graphical tools to facilitate the specification of the high-dimensional probability distributions of the underlying missingness components. The model parameters are estimated by maximizing the weighted log-likelihood function using an EM algorithm. To compare the performance of the proposed methodology with well-established approaches, including complete-case analysis and multiple imputation, several simulation experiments with binomial, Poisson, and normal regressions are carried out under both missing-at-random and not-missing-at-random mechanisms. The methods are illustrated by modeling data from a stage III melanoma clinical trial. The results show that the methodology is rather robust and flexible, representing a competitive alternative to traditional techniques.

7.
This paper applies the methodology of Finkelstein and Schoenfeld [Stat. Med. 13 (1994) 1747] to consider new treatment strategies in a synthetic clinical trial. The methodology is an approach for estimating survival functions as a composite of subdistributions defined by an auxiliary event that is intermediate to the failure. The subdistributions are usually calculated using all subjects in a study, by taking into account the path determined by each individual's auxiliary event. However, the method can also be used to obtain a composite estimate of failure from different subpopulations of patients. We utilize this feature of the methodology to test a new treatment strategy that changes therapy at later stages of disease, by combining subdistributions from different treatment arms of a clinical trial that was conducted to test therapies for the prevention of Pneumocystis carinii pneumonia.

8.
We propose a flexible functional approach for modelling generalized longitudinal data and survival times using principal components. In the proposed model the longitudinal observations can be continuous or categorical, such as Gaussian, binomial or Poisson outcomes. We generalize traditional joint models, which treat categorical data as continuous by applying transformations to variables such as CD4 counts. The proposed model is data-adaptive: it does not require pre-specified functional forms for the longitudinal trajectories and automatically detects characteristic patterns. The longitudinal trajectories, observed with measurement error or random error, are represented by flexible basis functions through a possibly nonlinear link function, combined with the dimension reduction afforded by functional principal component (FPC) analysis. The relationship between the longitudinal process and the event history is assessed using a Cox regression model. Although the proposed model inherits the flexibility of non-parametric methods, the estimation procedure based on the EM algorithm is still parametric in computation, and thus simple and easy to implement. The computation is further simplified by dimension reduction for the random coefficients, or FPC scores. An iterative selection procedure based on the Akaike information criterion (AIC) is proposed to choose the tuning parameters, such as the knots of the spline basis and the number of FPCs, so that an appropriate degree of smoothness and fluctuation can be captured. The effectiveness of the proposed approach is illustrated through a simulation study, followed by an application to longitudinal CD4 counts and survival data collected in a recent clinical trial comparing the efficiency and safety of two antiretroviral drugs.
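A compact sketch of the FPC-score extraction step for densely observed Gaussian trajectories (our simplification: the paper handles generalized outcomes through a link function and selects tuning parameters by AIC, while this sketch uses simulated curves and a variance-explained cutoff):

import numpy as np

rng = np.random.default_rng(2)

# Toy longitudinal data: n subjects observed on a common dense grid.
n, m = 100, 50
grid = np.linspace(0, 1, m)
scores = rng.normal(size=(n, 2)) * [1.0, 0.3]
X = (scores[:, :1] * np.sin(np.pi * grid) +
     scores[:, 1:] * np.cos(2 * np.pi * grid) +
     0.1 * rng.normal(size=(n, m)))

# FPCA on the dense grid: eigendecompose the sample covariance.
mu = X.mean(0)
C = np.cov(X - mu, rowvar=False)
eigval, eigvec = np.linalg.eigh(C)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

# Keep enough FPCs to explain 95% of the variance (a stand-in for
# the AIC-based selection described in the paper).
K = int(np.searchsorted(np.cumsum(eigval) / eigval.sum(), 0.95)) + 1
fpc_scores = (X - mu) @ eigvec[:, :K]
print(K, fpc_scores.shape)

The resulting FPC scores then enter the Cox regression as low-dimensional covariates summarizing each trajectory.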

9.
10.
A long-standing problem in clinical research is distinguishing drug-treated subjects who respond due to specific effects of the drug from those who respond to non-specific (or placebo) effects of the treatment. Linear mixed-effects models are commonly used to model longitudinal clinical trial data. In this paper we present a solution to the problem of identifying placebo responders using an optimal partitioning methodology for linear mixed-effects models. Since individual outcomes in a longitudinal study correspond to curves, the optimal partitioning methodology produces a set of prototypical outcome profiles. The methodology can accommodate both continuous and discrete covariates, and the proposed partitioning strategy is compared and contrasted with the growth mixture modelling approach. The methodology is applied to a two-phase depression clinical trial in which subjects were first treated openly for 12 weeks with fluoxetine, followed by a double-blind discontinuation phase in which responders to treatment in the first phase were randomized either to stay on fluoxetine or to switch to a placebo. The optimal partitioning methodology is applied to the first phase to identify prototypical outcome profiles. Using time to relapse in the second phase of the study, a survival analysis is performed on the partitioned data. The results identify prototypical profiles that distinguish whether subjects relapse depending on whether they stay on the drug or are randomized to a placebo.

11.
It is well known that minimax optimal designs are difficult to construct. Furthermore, since in practice the true error variance is never known, it is important to allow for small deviations and to construct robust optimal designs. We investigate a class of minimax optimal regression designs for models with heteroscedastic errors that are robust against possible misspecification of the error variance. The commonly used A-, c-, and I-optimality criteria are included in this class of minimax optimal designs. Several theoretical results are obtained, including a necessary condition and a reflection-symmetry property for these minimax optimal designs. In this article, we focus mainly on linear models and assume that an approximate error variance function is available. However, we also briefly discuss how the methodology works for nonlinear models. We then propose an effective algorithm for solving the challenging nonconvex optimization problems that arise in finding minimax designs on discrete design spaces. Examples are given to illustrate minimax optimal designs and their properties.
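To make the class of criteria concrete, here is a hedged sketch of the A-criterion for a simple heteroscedastic linear model on a discrete design space (our toy setup; the paper's contribution is the minimax version, which additionally guards such criteria against misspecified variance functions):

import numpy as np

def a_optimality(xs, w, var_fun):
    # Information matrix of the model E[y] = b0 + b1*x with
    # heteroscedastic errors var(y|x) = var_fun(x); the design puts
    # weight w[i] on point xs[i]. A-value = trace of the inverse.
    X = np.column_stack([np.ones_like(xs), xs])
    M = (X * (w / var_fun(xs))[:, None]).T @ X
    return np.trace(np.linalg.inv(M))

xs = np.array([-1.0, 0.0, 1.0])
w = np.array([0.4, 0.2, 0.4])
print(a_optimality(xs, w, lambda x: 1.0 + x ** 2))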

12.
We analyse MCMC chains, focusing on how to find simulation parameters that give good mixing for discrete-time, Harris ergodic Markov chains on a general state space X with invariant distribution π. The analysis uses an upper bound on the variance of the probability estimate. For each set of simulation parameters, the bound is estimated from an MCMC chain using recurrence intervals, a generalization of recurrence periods for discrete Markov chains. This makes it easy to compare the mixing properties of different simulation parameters. The paper gives general advice on how to improve the mixing of MCMC chains and presents a new methodology for finding an optimal acceptance rate for the Metropolis-Hastings algorithm. Several examples, both toy examples and large complex ones, illustrate how to apply the methodology in practice. In some of these examples we find that the optimal acceptance rate is smaller than the general recommendation in the literature.
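A self-contained sketch of the acceptance-rate tuning question for random-walk Metropolis-Hastings (the recurrence-interval variance bound itself is not reproduced here; lag-1 autocorrelation serves as a crude stand-in for a mixing diagnostic):

import numpy as np

rng = np.random.default_rng(3)

def rw_metropolis(logpi, x0, step, n):
    # Random-walk Metropolis-Hastings; returns chain and acceptance rate.
    x, chain, acc = x0, np.empty(n), 0
    lp = logpi(x)
    for i in range(n):
        prop = x + step * rng.normal()
        lp_prop = logpi(prop)
        if np.log(rng.random()) < lp_prop - lp:
            x, lp, acc = prop, lp_prop, acc + 1
        chain[i] = x
    return chain, acc / n

logpi = lambda x: -0.5 * x ** 2   # standard normal target
for step in [0.1, 1.0, 2.4, 10.0]:
    chain, rate = rw_metropolis(logpi, 0.0, step, 20000)
    ac1 = np.corrcoef(chain[:-1], chain[1:])[0, 1]
    print(f"step={step:5.1f}  accept={rate:.2f}  lag1-acf={ac1:.3f}")

Scanning the proposal scale this way shows why moderate acceptance rates (neither near 0 nor near 1) tend to mix best; the paper's bound makes the comparison principled and finds that the optimum can sit below the usual recommendations.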

13.
In this paper, we consider an inspection policy problem for a one-shot system with two types of units over a finite time span, and we determine optimal inspection intervals given the replacement times of Type 2 units. The interval availability and life-cycle cost are used as optimization criteria and are estimated by simulation. Two optimization models are proposed to find the optimal inspection intervals for exponential and general lifetime distributions. A heuristic method and a genetic algorithm are proposed to find near-optimal inspection intervals that satisfy the target interval availability while minimizing the life-cycle cost. We study numerical examples to compare the heuristic method with the genetic algorithm and to investigate the effect of the model parameters on the optimal solutions.
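A drastically simplified single-unit stand-in (our toy: one unit, exponential failures, perfect inspection-and-renewal; the paper's setting has two unit types, simulation-based estimates, and heuristic/GA search over interval vectors):

import numpy as np

def interval_availability(tau, lam):
    # One unit, exponential failures, perfect inspection-and-renewal
    # every tau: average fraction of time in the "up" state.
    return (1.0 - np.exp(-lam * tau)) / (lam * tau)

def cost_rate(tau, lam, c_insp, c_down):
    # Inspection cost per cycle plus expected downtime cost; expected
    # downtime per cycle is tau - E[min(failure time, tau)].
    downtime = tau - (1.0 - np.exp(-lam * tau)) / lam
    return (c_insp + c_down * downtime) / tau

taus = np.linspace(0.1, 10, 200)
rates = cost_rate(taus, lam=0.2, c_insp=1.0, c_down=5.0)
best = taus[np.argmin(rates)]
print(f"best interval ~ {best:.2f}, "
      f"availability {interval_availability(best, 0.2):.3f}")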

14.
We show how mutually utility-independent hierarchies, which weigh the various costs of an experiment against benefits expressed through a mixed Bayes linear utility representing the potential gains in knowledge from the experiment, provide a flexible and intuitive methodology for experimental design that remains tractable even for complex multivariate problems. A key feature of the approach is that we allow imprecision in the trade-offs between the various costs and benefits. We identify the Pareto optimal designs under the imprecise specification and suggest a criterion for selecting among such designs. The approach is illustrated with an experiment related to the oral glucose tolerance test.
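One plausible reading of the Pareto step, sketched in Python (hypothetical scores: each candidate design is summarized by coordinates in which smaller is better, e.g. monetary cost and negated expected gain in knowledge; under imprecise trade-offs, only dominated designs can be discarded):

import numpy as np

def pareto_front(points):
    # Keep designs not dominated coordinate-wise; lower is better
    # in every column.
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = any(
            np.all(q <= p) and np.any(q < p)
            for j, q in enumerate(pts) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical designs scored as (monetary cost, -expected information).
designs = [(3.0, -5.1), (2.0, -4.0), (4.0, -5.1), (2.5, -4.9)]
print(pareto_front(designs))   # indices of the admissible designs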

15.
The family of power series cure rate models provides a flexible modeling framework for survival data from populations with a cure fraction. In this work, we present a simplified estimation procedure for the maximum likelihood (ML) approach. ML estimates are obtained via the expectation-maximization (EM) algorithm, where the expectation step involves computing the expected number of concurrent causes for each individual. A major advantage is that the maximization step can be decomposed into separate maximizations of two lower-dimensional functions, one involving the regression parameters and the other the survival distribution parameters. Two simulation studies are performed: the first investigates the accuracy of the estimation procedure for different numbers of covariates, and the second compares our proposal with direct maximization of the observed log-likelihood function. Finally, we illustrate the technique on a dataset of survival times for patients with malignant melanoma.
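A schematic of why the maximization step separates (this is the standard complete-data factorization for competing-causes cure models; the paper's exact expressions may differ): with m_i latent causes, latency density f and survival S, and power series parameters θ_i(γ),

\[
\ell_c(\gamma,\beta) \;=\; \sum_{i=1}^{n} \log p\{m_i;\,\theta_i(\gamma)\}
\;+\; \sum_{i=1}^{n} \Big[\, \delta_i \log\frac{m_i\, f(t_i;\beta)}{S(t_i;\beta)} \;+\; m_i \log S(t_i;\beta) \Big],
\]

so once the E-step replaces m_i by its conditional expectation (the parameter-free log m_i term can be dropped), γ and β are updated by two separate lower-dimensional maximizations.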

16.
We present a methodology for rating, in real time, the creditworthiness of public companies in the U.S. from the prices of traded assets. Our approach uses asset pricing data to impute a term structure of risk-neutral survival functions, or default probabilities. Firms are then clustered into rating categories based on their survival functions using a functional clustering algorithm. This allows all public firms whose assets are traded to be rated directly by market participants. For firms whose assets are not traded, we show how they can be rated indirectly by matching them, on observable characteristics, to firms that are traded. We also show how the resulting ratings can be used to construct loss distributions for portfolios of bonds. Finally, we compare our ratings to Standard & Poor's and find that, over the period 2005 to 2011, our ratings lead theirs for firms that ultimately default.
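A minimal sketch of the clustering step (our simplification: k-means on survival curves discretized to a common maturity grid stands in for the paper's functional clustering algorithm, and the curves here are simulated, not market-implied):

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)

# Hypothetical risk-neutral survival curves on a maturity grid,
# one row per firm (simulated from constant default intensities).
grid = np.linspace(0.5, 10, 20)
hazards = rng.uniform(0.005, 0.15, size=200)
curves = np.exp(-np.outer(hazards, grid))

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(curves)

# Order clusters by average survival so labels read like ratings
# (0 = most creditworthy).
order = np.argsort(-km.cluster_centers_.mean(axis=1))
rating = {c: r for r, c in enumerate(order)}
print([rating[l] for l in km.labels_[:10]])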

17.
We propose a flexible prior model for the parameters of binary Markov random fields (MRFs), defined on rectangular lattices with maximal cliques generated from a template maximal clique. The prior model allows higher-order interactions to be included. We also define a reversible jump Markov chain Monte Carlo algorithm to sample from the associated posterior distribution. The number of possible parameters for a higher-order MRF is high even for small template maximal cliques. We define a flexible parametric form in which the parameters are interpretable as potentials for clique configurations, and we limit the effective number of parameters by assigning a priori discrete probabilities to events where groups of parameter values are equal. To cope with the computationally intractable normalising constant of MRFs, we adopt a previously defined approximation of binary MRFs. We demonstrate the flexibility of our prior formulation with simulated and real data examples.

18.
This article proposes an adaptive sequential preventive maintenance (PM) policy in which an improvement factor is newly introduced to measure the effect of each PM. In this model, the PM actions are conducted at different time intervals, so an adaptive method is needed to determine the optimal PM times that minimize the expected cost rate per unit time. At each PM, the hazard rate is reduced by an amount governed by the improvement factor, which depends on the number of PMs preceding the current one. We derive mathematical formulas for the expected cost rate per unit time that incorporate the PM cost, repair cost, and replacement cost. Assuming that the failure times follow a Weibull distribution, we obtain an optimal sequential PM policy by minimizing the expected cost rate. Furthermore, we consider Bayesian aspects of the sequential PM policy and discuss its optimality. The effects of the model parameters and of the functional form of the improvement factor on the optimal PM policy are assessed numerically through sensitivity analysis, and some numerical examples are presented for illustration.
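A hedged numerical sketch (our stand-in model: minimal repairs between PMs and a hazard scaled by ρ^k after the k-th PM, for an assumed Weibull baseline; the article's improvement factor enters differently in detail):

import numpy as np

def weibull_cum_hazard(t, beta, eta):
    return (t / eta) ** beta

def expected_cost_rate(intervals, beta, eta, rho, c_pm, c_rep, c_repl):
    # Under minimal repair, the expected number of failures in an
    # epoch is the cumulative-hazard increment; after the k-th PM the
    # hazard is scaled by rho**k (our stand-in improvement effect).
    t, failures = 0.0, 0.0
    for k, dt in enumerate(intervals):
        scale = rho ** k
        failures += scale * (weibull_cum_hazard(t + dt, beta, eta)
                             - weibull_cum_hazard(t, beta, eta))
        t += dt
    n_pm = len(intervals) - 1        # the last epoch ends in replacement
    cost = n_pm * c_pm + failures * c_rep + c_repl
    return cost / t

print(expected_cost_rate([2.0, 2.5, 3.0], beta=2.2, eta=5.0,
                         rho=0.8, c_pm=1.0, c_rep=0.5, c_repl=10.0))

Minimizing this rate over the interval vector (held fixed above) mirrors the structure of the optimization the article studies.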

19.
We consider the optimal design of controlled experimental epidemics or transmission experiments, whose purpose is to inform the practitioner about disease transmission and recovery rates. Our methodology employs Gaussian diffusion approximations, applicable to epidemics that can be modeled as density-dependent Markov processes involving relatively large numbers of organisms. We focus on finding (i) the optimal times at which to collect data about the state of the system for a small number of discrete observations, (ii) the optimal numbers of susceptible and infective individuals with which to begin an experiment, and (iii) the optimal number of replicate epidemics to use. We adopt the popular D-optimality criterion as the objective function for designing our experiments, since it leads to estimates with maximum precision, subject to valid assumptions about parameter values. We demonstrate the broad applicability of our methodology using a diverse array of compartmental epidemic models: a time-homogeneous SIS epidemic, a time-inhomogeneous SI epidemic with exponentially decreasing transmission rates, and a partially observed SIR epidemic in which the infectious period for an individual has a gamma distribution.
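A hedged sketch of D-optimal selection of observation times for the deterministic mean curve of an SIS epidemic with Gaussian observation error (our simplification of the diffusion-approximation machinery; the model, parameter values, and two-observation restriction are all illustrative):

import itertools
import numpy as np
from scipy.integrate import solve_ivp

N, I0, SIGMA = 100.0, 5.0, 2.0   # population, initial infectives, obs sd

def mean_I(times, beta, gamma):
    # Deterministic SIS mean curve: dI/dt = beta*I*(N-I)/N - gamma*I.
    f = lambda t, I: beta * I * (N - I) / N - gamma * I
    sol = solve_ivp(f, (0, times[-1]), [I0], t_eval=times, rtol=1e-8)
    return sol.y[0]

def log_det_info(times, beta=0.8, gamma=0.3, h=1e-5):
    # Fisher information via finite-difference sensitivities.
    times = np.asarray(times, dtype=float)
    s_b = (mean_I(times, beta + h, gamma) - mean_I(times, beta - h, gamma)) / (2 * h)
    s_g = (mean_I(times, beta, gamma + h) - mean_I(times, beta, gamma - h)) / (2 * h)
    J = np.column_stack([s_b, s_g]) / SIGMA
    sign, logdet = np.linalg.slogdet(J.T @ J)
    return logdet if sign > 0 else -np.inf

grid = np.linspace(1, 30, 30)
best = max(itertools.combinations(grid, 2), key=log_det_info)
print("D-optimal pair of observation times:", best)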

20.
Variable screening for censored survival data is most challenging when both the survival and censoring times are correlated with an ultrahigh-dimensional vector of covariates. Existing approaches to handling censoring often use inverse probability weighting, assuming that censoring is independent of both the survival time and the covariates. This is a convenient but rather restrictive assumption that may fail in real applications, especially when the censoring mechanism is complex and the number of covariates is large. To accommodate the heterogeneous (covariate-dependent) censoring often present in high-dimensional survival data, we propose a Gehan-type rank screening method to select features that are relevant to the survival time. The method is invariant to monotone transformations of the response and of the predictors, and it works robustly for a general class of survival models. We establish the sure screening property of the proposed methodology. Simulation studies and a lymphoma data analysis demonstrate its favorable performance and practical utility.
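One plausible form of such a marginal utility, sketched in Python (our illustration: the Gehan estimating function of a marginal accelerated failure time model evaluated at zero; the statistic used in the paper may differ in detail):

import numpy as np

rng = np.random.default_rng(5)

def gehan_utility(x, t, d):
    # Marginal Gehan-type statistic for one covariate: comparable pairs
    # are those where the earlier time is an observed event. A large
    # |u| suggests the covariate shifts the survival-time ranks.
    comp = (t[:, None] <= t[None, :]) & (d[:, None] == 1)
    u = ((x[:, None] - x[None, :]) * comp).sum() / (len(t) ** 2)
    return abs(u)

# Toy data: survival depends on the first feature only.
n, p = 200, 50
X = rng.normal(size=(n, p))
T = np.exp(1.0 * X[:, 0] + rng.normal(size=n) * 0.5)
C = rng.exponential(8.0, size=n)
t, d = np.minimum(T, C), (T <= C).astype(int)

utilities = np.array([gehan_utility(X[:, j], t, d) for j in range(p)])
print(np.argsort(-utilities)[:5])   # screened-in features

Ranking covariates by this utility and keeping the top few is the screening step; the sure screening property guarantees that the truly relevant features survive with probability tending to one.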
