Similar Documents
20 similar documents found.
1.
Of the 324 petroleum refineries operating in the U.S. in 1982, only 149 were still in the hands of their original owners in 2007. Using duration analysis, this paper explores why refineries change ownership or shut down. Plants are more likely to 'survive' with their original owners if they are older or larger, but less likely if the owner is a major integrated firm or the refinery is more technologically complex; this latter result differs from existing research on the issue. The paper also presents a split population model to relax the duration model's general assumption that all refineries will eventually close down; the empirical results show that the split population model converges on a standard hazard model, with the log-logistic version fitting best. Finally, a multinomial logit model is estimated to analyze the factors that influence a refinery's choice among staying open, closing, and changing ownership. Plant size, age, and technology usage have positive impacts on the likelihood that a refinery will stay open or change ownership (rather than close down).
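The split-population idea can be sketched numerically: a fraction p of plants never "fails" (never closes), and the rest follow an ordinary duration distribution. A minimal sketch assuming a log-logistic baseline with hypothetical parameters p, alpha (scale), and beta (shape):

```python
def loglogistic_survival(t, alpha, beta):
    """Log-logistic survival: S(t) = 1 / (1 + (t/alpha)**beta)."""
    return 1.0 / (1.0 + (t / alpha) ** beta)

def split_population_survival(t, p, alpha, beta):
    """Split-population survival: a fraction p never experiences the
    event, the rest follow a log-logistic duration distribution, so
    the population survival curve levels off at p instead of 0."""
    if t == 0:
        return 1.0
    return p + (1.0 - p) * loglogistic_survival(t, alpha, beta)
```

As t grows, the curve approaches p rather than zero, which is exactly the assumption the split-population model relaxes relative to a standard hazard model.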

2.
We propose a novel semiparametric version of the widely used proportional hazards survival model. Features include an arbitrarily rich class of continuous baseline hazards, an attractive epidemiological interpretation of the hazard as a latent competing risk model, and trivial handling of censoring. Models are fitted by using a data augmentation scheme. The methodology is applied to a data set recording times to first hospitalization following clinical diagnosis of acquired immune deficiency syndrome for a sample of 169 patients.

3.
In incident cohort studies, survival data often include subjects who have had an initiating event at recruitment and may potentially experience two successive events (first and second) during the follow-up period. Since the second duration process becomes observable only if the first event has occurred, left truncation and dependent censoring arise if the two duration times are correlated. To confront the two potential sampling biases, we propose two inverse-probability-weighted (IPW) estimators for the estimation of the joint survival function of two successive duration times. One of them is similar to the estimator proposed by Chang and Tzeng [Nonparametric estimation of sojourn time distributions for truncated serial event data – a weight adjusted approach, Lifetime Data Anal. 12 (2006), pp. 53–67]. The other is an extension of the nonparametric estimator proposed by Wang and Wells [Nonparametric estimation of successive duration times under dependent censoring, Biometrika 85 (1998), pp. 561–572]. The weak convergence of both estimators is established. Furthermore, the delete-one jackknife and simple bootstrap methods are used to estimate standard deviations and construct interval estimators. A simulation study is conducted to compare the two IPW approaches.
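The IPW idea can be illustrated with a deliberately simplified sketch: a weighted empirical survival function in which each subject is reweighted by the inverse of an assumed-known inclusion probability. This is not the paper's estimator, only the common building block behind IPW approaches:

```python
def ipw_survival(times, probs, t):
    """Inverse-probability-weighted empirical survival at time t.
    probs[i] is the (assumed known) probability that subject i is
    observed, so w_i = 1/probs[i] reweights the observed sample
    back toward the full population."""
    weights = [1.0 / p for p in probs]
    total = sum(weights)
    return sum(w for ti, w in zip(times, weights) if ti > t) / total
```

With all inclusion probabilities equal to one this reduces to the ordinary empirical survival function; subjects that were unlikely to be observed count for more.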

4.
In an observational study in which each treated subject is matched to several untreated controls by using observed pretreatment covariates, a sensitivity analysis asks how hidden biases due to unobserved covariates might alter the conclusions. The bounds required for a sensitivity analysis are the solution to an optimization problem. In general, this optimization problem is not separable, in the sense that one cannot find the needed optimum by performing a separate optimization in each matched set and combining the results. We show, however, that this optimization problem is asymptotically separable, so that when there are many matched sets a separate optimization may be performed in each matched set and the results combined to yield the correct optimum with negligible error. This is true when the Wilcoxon rank sum test or the Hodges-Lehmann aligned rank test is applied in matching with multiple controls. Numerical calculations show that the asymptotic approximation performs well with as few as 10 matched sets. In the case of the rank sum test, a table is given containing the separable solution. With this table, only simple arithmetic is required to conduct the sensitivity analysis. The method also supplies estimates, such as the Hodges-Lehmann estimate, and confidence intervals associated with rank tests. The method is illustrated in a study of dropping out of US high schools and the effects on cognitive test scores.
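The bounds in such a sensitivity analysis are built on rank statistics like the Wilcoxon rank sum. A minimal sketch of the statistic itself (using midranks for ties); the bounding step over the hidden-bias parameter is not shown:

```python
def rank_sum(treated, controls):
    """Wilcoxon rank sum statistic for the treated group, computed
    from the 1-based midranks of the combined sample."""
    combined = sorted(treated + controls)
    def midrank(v):
        lo = combined.index(v)             # first position of v
        hi = lo + combined.count(v) - 1    # last position of v
        return (lo + hi) / 2 + 1           # 1-based midrank
    return sum(midrank(v) for v in treated)
```

In a sensitivity analysis one asks how large this statistic could plausibly become if treatment assignment within matched sets were biased by an unobserved covariate.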

5.
Nonparametric estimation of current status data with dependent censoring
This paper discusses nonparametric estimation of a survival function when one observes only current status data (McKeown and Jewell, Lifetime Data Anal 16:215-230, 2010; Sun, The statistical analysis of interval-censored failure time data, 2006; Sun and Sun, Can J Stat 33:85-96, 2005). In this case, each subject is observed only once, and the failure time of interest is observed to be either smaller or larger than the observation or censoring time. If the failure time and the observation time can be assumed to be independent, several methods have been developed for the problem. Here we focus on the situation where the independence assumption does not hold and propose two simple estimation procedures under the copula model framework. The proposed estimates allow one to perform sensitivity analysis or identify the shape of a survival function, among other uses. A simulation study indicates that the two methods work well, and they are applied to a motivating example from a tumorigenicity study.
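For contrast with the dependent-censoring case treated here, the standard NPMLE of the distribution function under the independence assumption is the isotonic regression (pool-adjacent-violators algorithm, PAVA) of the failure indicators ordered by observation time. A minimal sketch:

```python
def current_status_npmle(obs_times, indicators):
    """NPMLE of F at the observation times under independent
    censoring: isotonic regression (PAVA) of the failure indicators
    sorted by observation time.  Returns (sorted times, fitted F)."""
    pairs = sorted(zip(obs_times, indicators))
    blocks = []  # each block is [sum of indicators, count]
    for _, d in pairs:
        blocks.append([float(d), 1])
        # pool adjacent violators: merge while monotonicity fails
        while (len(blocks) > 1 and
               blocks[-2][0] / blocks[-2][1] >= blocks[-1][0] / blocks[-1][1]):
            s, n = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += n
    fitted = []
    for s, n in blocks:
        fitted.extend([s / n] * n)
    return [t for t, _ in pairs], fitted
```

Under dependent censoring this estimator is no longer valid, which is what motivates the copula-based procedures of the paper.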

6.
Survival times for the Acacia mangium plantation in the Segaliud Lokan Project, Sabah, East Malaysia were analysed based on 20 permanent sample plots (PSPs) established in 1988 as a spacing experiment. The PSPs were established following a complete randomized block design with five levels of spacing randomly assigned to units within four blocks at different sites. The survival times of trees in years are of interest. Since the inventories were only conducted annually, the actual survival time for each tree was not observed; hence, the data set comprises censored survival times. Initial analysis of the survival of the Acacia mangium plantation suggested there is a block-by-spacing interaction; a Weibull model gives a reasonable fit to the replicate survival times within each PSP, but a standard Weibull regression model is inappropriate because the shape parameter differs between PSPs. In this paper we investigate the form of the non-constant Weibull shape parameter. Parsimonious models for the Weibull survival times have been derived using maximum likelihood methods. The factor selection for the parameters is based on a backward elimination procedure. The models are compared using likelihood ratio statistics. The results suggest that both Weibull parameters depend on spacing and block.
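A right-censored Weibull fit of the kind used within each PSP can be sketched with a profile likelihood: for a fixed shape k the scale MLE has a closed form, and k is then chosen over a grid. An illustrative sketch, not the paper's regression model:

```python
import math

def fit_weibull_censored(times, events, shape_grid):
    """Profile MLE for a right-censored Weibull sample.  For fixed
    shape k the scale MLE is closed-form: scale**k = sum(t_i**k) / d,
    where d is the number of events; k is then chosen from shape_grid
    by maximising the profile log-likelihood."""
    d = sum(events)
    best = None
    for k in shape_grid:
        scale = (sum(t ** k for t in times) / d) ** (1.0 / k)
        loglik = (d * math.log(k)
                  + (k - 1) * sum(math.log(t) for t, e in zip(times, events) if e)
                  - d * k * math.log(scale)
                  - sum((t / scale) ** k for t in times))
        if best is None or loglik > best[0]:
            best = (loglik, k, scale)
    return best[1], best[2]
```

Allowing the shape parameter to vary with spacing and block, as the paper does, amounts to fitting such a model with k modelled as a function of the design factors instead of a single grid value.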

7.
To analyse the risk factors of coronary heart disease (CHD), we apply the Bayesian model averaging approach that formalizes the model selection process and deals with model uncertainty in a discrete-time survival model to the data from the Framingham Heart Study. We also use the Alternating Conditional Expectation algorithm to transform the risk factors, such that their relationships with CHD are best described, overcoming the problem of coding such variables subjectively. For the Framingham Study, the Bayesian model averaging approach, which makes inferences about the effects of covariates on CHD based on an average of the posterior distributions of the set of identified models, outperforms the stepwise method in predictive performance. We also show that age, cholesterol, and smoking are nonlinearly associated with the occurrence of CHD and that P-values from models selected from stepwise methods tend to overestimate the evidence for the predictive value of a risk factor and ignore model uncertainty.
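Bayesian model averaging is often approximated in practice by BIC-based posterior model weights. A minimal sketch under that approximation (the paper's exact posterior computation may differ):

```python
import math

def bic_weights(bics):
    """Approximate posterior model probabilities from BIC values:
    w_m proportional to exp(-BIC_m / 2), normalised, assuming equal
    prior model probabilities."""
    best = min(bics)  # shift for numerical stability
    raw = [math.exp(-(b - best) / 2.0) for b in bics]
    total = sum(raw)
    return [r / total for r in raw]

def averaged_prediction(predictions, bics):
    """Model-averaged prediction: each model's prediction weighted
    by its approximate posterior probability."""
    w = bic_weights(bics)
    return sum(wi * p for wi, p in zip(w, predictions))
```

Averaging over models in this way is what lets BMA account for model uncertainty, instead of conditioning on a single stepwise-selected model.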

8.
It is quite common in epidemiology that we wish to assess the quality of estimators on a particular set of information, whereas the estimators may use a larger set of information. Two examples are studied: the first occurs when we construct a model for an event which happens if a continuous variable is above a certain threshold. We can compare estimators based on the observation of only the event or on the whole continuous variable. The other example is that of predicting survival based only on survival information or using, in addition, information on a disease. We develop modified Akaike information criterion (AIC) and likelihood cross-validation (LCV) criteria to compare estimators in this non-standard situation. We show that a normalized difference of AIC has a bias equal to o(n^-1) if the estimators are based on well-specified models; a normalized difference of LCV always has a bias equal to o(n^-1). A simulation study shows that both criteria work well, although the normalized difference of LCV tends to be better and is more robust. Moreover, in the case of well-specified models the difference of risks boils down to the difference of statistical risks, which can be rather precisely estimated. For 'compatible' models the difference of risks is often the main term, but there can also be a difference of mis-specification risks.

9.
The paper proposes an alternative approach to studying the effect of premarital cohabitation on subsequent duration of marriage on the basis of a strong ignorability assumption. The approach is called propensity score matching and consists of computing survival functions conditional on a function of observed variables (the propensity score), thus eliminating any selection that is derived from these variables. In this way, it is possible to identify a time-varying effect of cohabitation without making any assumption either regarding its shape or the functional form of covariate effects. The output of the matching method is the difference between the survival functions of treated and untreated individuals at each time point. Results show that the cohabitation effect on duration of marriage is indeed time varying, being close to zero for the first 2–3 years and rising considerably in the following years.
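The matching step can be sketched as nearest-neighbour matching (with replacement) on a precomputed propensity score. For brevity this sketch compares scalar outcomes; the paper compares whole survival curves at each time point:

```python
def match_on_propensity(treated, controls):
    """Nearest-neighbour matching (with replacement) on the
    propensity score.  Each element is a (score, outcome) pair;
    returns the average treated-minus-matched-control difference."""
    diffs = []
    for score, outcome in treated:
        _, c_outcome = min(controls, key=lambda c: abs(c[0] - score))
        diffs.append(outcome - c_outcome)
    return sum(diffs) / len(diffs)
```

Because matching conditions only on the score, no functional form is imposed on either the treatment effect or the covariate effects, which is the point made in the abstract.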

10.
The problem of nonparametric estimation of the spectral density function of a partially observed homogeneous random field is addressed. In particular, a class of estimators with favorable asymptotic performance (bias, variance, rate of convergence) is proposed. The proposed estimators are actually shown to be √N-consistent if the autocovariance function of the random field is supported on a compact set, and close to √N-consistent if the autocovariance function decays to zero sufficiently fast for increasing lags.
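A one-dimensional analogue illustrates the compact-support idea: a lag-window spectral estimate that truncates the autocovariance at lag m (playing the role of the compact support), here with Bartlett weights. This is an illustrative sketch, not the paper's random-field estimator:

```python
import math

def spectral_estimate(x, m, omega):
    """Lag-window spectral density estimate at frequency omega:
    f(w) = (1/2pi) * sum_{|h|<=m} w_h * c(h) * cos(h*w), with
    Bartlett weights w_h = 1 - |h|/(m+1) and biased sample
    autocovariances c(h)."""
    n = len(x)
    mean = sum(x) / n
    def acov(h):
        return sum((x[t] - mean) * (x[t + h] - mean)
                   for t in range(n - h)) / n
    total = acov(0)
    for h in range(1, m + 1):
        total += 2.0 * (1.0 - h / (m + 1)) * acov(h) * math.cos(h * omega)
    return total / (2.0 * math.pi)
```

When the true autocovariance vanishes beyond a fixed lag, no information is lost by the truncation, which is the intuition behind the √N-consistency result.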

11.

In incident cohort studies, survival data often include subjects who have had an initiating event at recruitment and may potentially experience two successive events (first and second) during the follow-up period. When disease registries or surveillance systems collect data based on incidence occurring within a specific calendar time interval, the initial event is usually subject to double truncation. Furthermore, since the second duration process is observable only if the first event has occurred, double truncation and dependent censoring arise. In this article, under the two sampling biases with an unspecified distribution of truncation variables, we propose a nonparametric estimator of the joint survival function of two successive duration times using the inverse-probability-weighted (IPW) approach. The consistency of the proposed estimator is established. Based on the estimated marginal survival functions, we also propose a two-stage estimation procedure for estimating the parameters of the copula model. The bootstrap method is used to construct confidence intervals. Numerical studies demonstrate that the proposed estimation approaches perform well with moderate sample sizes.

12.
Designs for early phase dose finding clinical trials typically are either phase I based on toxicity, or phase I-II based on toxicity and efficacy. These designs rely on the implicit assumption that the dose of an experimental agent chosen using these short-term outcomes will maximize the agent's long-term therapeutic success rate. In many clinical settings, this assumption is not true. A dose selected in an early phase oncology trial may give suboptimal progression-free survival or overall survival time, often due to a high rate of relapse following response. To address this problem, a new family of Bayesian generalized phase I-II designs is proposed. First, a conventional phase I-II design based on short-term outcomes is used to identify a set of candidate doses, rather than selecting one dose. Additional patients then are randomized among the candidates, patients are followed for a predefined longer time period, and a final dose is selected to maximize the long-term therapeutic success rate, defined in terms of duration of response. Dose-specific sample sizes in the randomization are determined adaptively to obtain a desired level of selection reliability. The design was motivated by a phase I-II trial to find an optimal dose of natural killer cells as targeted immunotherapy for recurrent or treatment-resistant B-cell hematologic malignancies. A simulation study shows that, under a range of scenarios in the context of this trial, the proposed design has much better performance than two conventional phase I-II designs.

13.

For large cohort studies with rare outcomes, the nested case-control design only requires data collection on small subsets of the individuals at risk. These are typically randomly sampled at the observed event times, and a weighted, stratified analysis takes over the role of the full cohort analysis. Motivated by observational studies on the impact of hospital-acquired infection on hospital stay outcome, we are interested in situations where not necessarily the outcome is rare, but a time-dependent exposure, such as the occurrence of an adverse event or disease progression, is. Using the counting process formulation of general nested case-control designs, we propose three sampling schemes where not all commonly observed outcomes need to be included in the analysis. Rather, inclusion probabilities may be time-dependent and may even depend on the past sampling and exposure history. A bootstrap analysis of a full cohort data set from hospital epidemiology allows us to investigate the practical utility of the proposed sampling schemes in comparison to a full cohort analysis and an overly simple application of the nested case-control design when the outcome is not rare.
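The classical sampling step that the proposed schemes generalize can be sketched as: at each observed event time, draw m controls from the subjects still at risk. The time-dependent inclusion probabilities of the paper's schemes are not shown:

```python
import random

def nested_case_control(cohort, m, seed=0):
    """Classical nested case-control sampling.  cohort is a list of
    (event_time, is_event) tuples; for each event, m controls are
    drawn without replacement from the risk set at that event time.
    Returns a list of (case_index, control_indices) matched sets."""
    rng = random.Random(seed)
    sampled_sets = []
    events = sorted((t, i) for i, (t, e) in enumerate(cohort) if e)
    for t, case in events:
        at_risk = [i for i, (ti, _) in enumerate(cohort)
                   if ti >= t and i != case]
        controls = rng.sample(at_risk, min(m, len(at_risk)))
        sampled_sets.append((case, controls))
    return sampled_sets
```

The counting-process formulation in the paper replaces this fixed-size random draw with inclusion probabilities that may depend on time and on the past sampling and exposure history.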


14.
It is shown that a bivariate survival function is both New Better than Used in Expectation (NBUE) and New Worse than Used in Expectation (NWUE) if and only if it is a bivariate Gumbel distribution. Statistical procedures are then presented to test whether, within the class of bivariate NBUE survival functions, a survival function is Gumbel's bivariate exponential.
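For reference, Gumbel's bivariate exponential (type I) with unit-rate marginals has the closed-form survival function below; the marginals reduce to ordinary exponentials and theta = 0 gives independence:

```python
import math

def gumbel_bivariate_survival(x, y, theta):
    """Gumbel's type I bivariate exponential survival function with
    unit-rate marginals: S(x, y) = exp(-x - y - theta*x*y),
    for x, y >= 0 and 0 <= theta <= 1."""
    return math.exp(-x - y - theta * x * y)
```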

15.
Prostate cancer is the most common cancer diagnosed in American men and the second leading cause of death from malignancies. Large geographical variation and racial disparities exist in prostate cancer survival rates. Much work on spatial survival models is based on the proportional hazards model, but little has focused on the accelerated failure time model. In this paper, we investigate the prostate cancer data of Louisiana from the SEER program; violation of the proportional hazards assumption suggests that a spatial survival model based on the accelerated failure time model is more appropriate for this data set. To account for possible extra-variation, we consider spatially-referenced independent or dependent spatial structures. The deviance information criterion (DIC) is used to select the best-fitting model within the Bayesian framework. The results from our study indicate that age, race, stage, and geographical distribution are significant in evaluating prostate cancer survival.

16.
Feedforward neural networks are often used in a similar manner as logistic regression models; that is, to estimate the probability of the occurrence of an event. In this paper, a probabilistic model is developed for the purpose of estimating the probability that a patient who has been admitted to the hospital with a medical back diagnosis will be released after only a short stay or will remain hospitalized for a longer period of time. As the purpose of the analysis is to determine if hospital characteristics influence the decision to retain a patient, the inputs to this model are a set of demographic variables that describe the various hospitals. The output is the probability of either a short or long term hospital stay. In order to compare the ability of each method to model the data, a hypothesis test is performed to test for an improvement resulting from the use of the neural network model.
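The logistic-regression baseline such a network is compared against can be sketched with a single hypothetical hospital characteristic as input. A minimal sketch fitted by gradient ascent on the log-likelihood:

```python
import math

def fit_logistic(xs, ys, lr=0.5, steps=2000):
    """Single-feature logistic regression fitted by gradient ascent
    on the log-likelihood; returns (intercept, slope)."""
    b0 = b1 = 0.0
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p          # gradient w.r.t. intercept
            g1 += (y - p) * x    # gradient w.r.t. slope
        b0 += lr * g0 / len(xs)
        b1 += lr * g1 / len(xs)
    return b0, b1

def predict(b0, b1, x):
    """Estimated probability of the event (e.g. a long hospital stay)."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
```

A feedforward network replaces the linear term b0 + b1*x with a nonlinear function of the inputs, which is what the paper's hypothesis test evaluates for improvement.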

17.
In incident cohort studies, survival data often include subjects who have experienced an initiating event but have not experienced a subsequent event at the calendar time of recruitment. During the follow-up periods, subjects may undergo a series of successive events. Since the second/third duration process becomes observable only if the first/second event has occurred, the data are subject to left truncation and dependent censoring. In this article, using the inverse-probability-weighted (IPW) approach, we propose nonparametric estimators for the estimation of the joint survival function of three successive duration times. The asymptotic properties of the proposed estimators are established. Simple bootstrap methods are used to estimate standard deviations and construct interval estimators. A simulation study is conducted to investigate the finite sample properties of the proposed estimators.

18.
Variable selection for nonlinear regression is a complex problem, made even more difficult when there are a large number of potential covariates and a limited number of data points. We propose herein a multi-stage method that combines state-of-the-art techniques at each stage to best discover the relevant variables. At the first stage, an extension of Bayesian additive regression trees (BART) is adopted to reduce the total number of variables to around 30. At the second stage, sensitivity analysis in the treed Gaussian process is adopted to further reduce the total number of variables. Two stopping rules are designed, and sequential design is adopted to make the best use of previous information. We demonstrate our approach on two simulated examples and one real data set.

19.
This paper compares minimum distance estimation with best linear unbiased estimation to determine which technique provides the most accurate estimates of location and scale parameters as applied to the three-parameter Pareto distribution. Two minimum distance estimators are developed for each of the three distance measures used (Kolmogorov, Cramer-von Mises, and Anderson-Darling), resulting in six new estimators. For a given sample size of 6 or 18 and shape parameter 1(1)4, the location and scale parameters are estimated. A Monte Carlo technique is used to generate the sample sets. The best linear unbiased estimator and the six minimum distance estimators provide parameter estimates based on each sample set. These estimates are compared using mean square error as the evaluation tool. Results show that the best linear unbiased estimator provided more accurate estimates of location and scale than did the minimum distance estimators tested.
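The Kolmogorov-distance variant can be sketched as a grid search over location and scale. The parameterization below (a shifted Pareto II CDF with known shape) is an assumption chosen for illustration, not necessarily the paper's:

```python
def ks_distance(sample, cdf):
    """Kolmogorov distance between the empirical CDF of a sorted
    sample and a candidate CDF."""
    n = len(sample)
    d = 0.0
    for i, x in enumerate(sample):
        f = cdf(x)
        d = max(d, abs(f - (i + 1) / n), abs(f - i / n))
    return d

def fit_pareto_minimum_distance(sample, alpha, mu_grid, sigma_grid):
    """Grid-search minimum-distance estimates of location mu and
    scale sigma for a shifted Pareto II CDF with known shape alpha:
    F(x) = 1 - (1 + (x - mu)/sigma)**(-alpha) for x >= mu.
    Returns (distance, mu, sigma)."""
    sample = sorted(sample)
    best = None
    for mu in mu_grid:
        for sigma in sigma_grid:
            cdf = lambda x: 0.0 if x < mu else 1 - (1 + (x - mu) / sigma) ** (-alpha)
            d = ks_distance(sample, cdf)
            if best is None or d < best[0]:
                best = (d, mu, sigma)
    return best
```

Replacing `ks_distance` with a Cramer-von Mises or Anderson-Darling discrepancy gives the other minimum distance estimators compared in the paper.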

20.
A new solution is proposed for a sparse data problem arising in nonparametric estimation of a bivariate survival function. Prior information, if available, can be used to obtain initial values for the EM algorithm. Initial values will completely determine estimates of portions of the distribution which are not identifiable from the data, while having a minimal effect on estimates of portions of the distribution for which the data provide sufficient information. Methods are applied to the distribution of women's age at first marriage and age at birth of first child, using data from the Current Population Surveys of 1975 and 1986.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)