Similar Documents
20 similar documents found.
1.
ABSTRACT

We present here an extension of Pan's multiple imputation approach to Cox regression in the setting of interval-censored competing risks data. The idea is to convert interval-censored data into multiple sets of complete or right-censored data and to use partial likelihood methods to analyse them. The process is iterated, and at each step, the coefficient of interest, its variance–covariance matrix, and the baseline cumulative incidence function are updated from multiple posterior estimates derived from the Fine and Gray sub-distribution hazards regression given augmented data. Through simulation of patients at risk of failure from two causes, following a prescheduled programme that allows for informative interval-censoring mechanisms, we show that the proposed method yields more accurate coefficient estimates than the simple imputation approach. We have implemented the method in the MIICD R package, available on the CRAN website.
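
A minimal sketch of the pooling step, assuming the m augmented data sets have already been analysed with Fine and Gray's model; this is just Rubin's rules, and the function and variable names are illustrative rather than the MIICD internals:

    import numpy as np

    def pool_rubin(coefs, covs):
        """Pool m imputation-specific estimates via Rubin's rules.

        coefs : (m, p) array of coefficient estimates
        covs  : (m, p, p) array of variance-covariance matrices
        """
        m = coefs.shape[0]
        coef_bar = coefs.mean(axis=0)            # pooled point estimate
        within = covs.mean(axis=0)               # within-imputation variance
        diffs = coefs - coef_bar
        between = diffs.T @ diffs / (m - 1)      # between-imputation variance
        total = within + (1 + 1 / m) * between   # total variance
        return coef_bar, total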

2.
Before releasing survey data, statistical agencies usually perturb the original data to keep each survey unit's information confidential. One significant concern in releasing survey microdata is identity disclosure, which occurs when an intruder correctly identifies the records of a survey unit by matching the values of some key (or pseudo-identifying) variables. We examine a recently developed post-randomization method for a strict control of identification risks in releasing survey microdata. While that procedure preserves the observed frequencies, and hence statistical estimates, under simple random sampling, we show that in general surveys it may induce considerable bias in commonly used survey-weighted estimators. We propose a modified procedure that better preserves weighted estimates. The procedure is illustrated and empirically assessed with an application to a publicly available US Census Bureau data set.

3.
4.
Frequently in process monitoring, situations arise in which the order in which events occur cannot be distinguished, motivating the need to accommodate multiple observations occurring at the same time, or concurrent observations. The risk-adjusted Bernoulli cumulative sum (CUSUM) control chart can be used to monitor the rate of an adverse event by fitting a risk-adjustment model, followed by a likelihood ratio-based scoring method that produces a statistic that can be monitored. In our paper, we develop a risk-adjusted Bernoulli CUSUM control chart for concurrent observations. Furthermore, we adopt a novel approach that combines a mixture model with kernel density estimation to perform risk adjustment with regard to spatial location. Our proposed method allows for monitoring binary outcomes through time with multiple observations at each time point, where the chart is spatially adjusted for each Bernoulli observation's estimated probability of the adverse event. A simulation study is presented to assess the performance of the proposed monitoring scheme. We apply our method to data from Wayne County, Michigan between 2005 and 2014 to monitor the rate of foreclosure as a percentage of all housing transactions.
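
A minimal sketch of the likelihood-ratio scoring behind a risk-adjusted Bernoulli CUSUM. The score formula follows the standard construction for detecting an odds-ratio shift; summing the scores of concurrent observations at each time point is an assumption about how ties are handled here, and the spatial risk-adjustment model producing the probabilities is not shown:

    import numpy as np

    def bernoulli_cusum(times, outcomes, probs, odds_ratio=2.0, h=4.0):
        """Risk-adjusted Bernoulli CUSUM allowing concurrent observations.

        times    : time index of each observation (values may repeat)
        outcomes : 0/1 adverse-event indicators
        probs    : risk-adjusted in-control event probabilities
        """
        times, outcomes, probs = map(np.asarray, (times, outcomes, probs))
        # log-likelihood ratio score for each Bernoulli observation
        scores = outcomes * np.log(odds_ratio) - np.log(1 - probs + odds_ratio * probs)
        c, path = 0.0, []
        for t in np.unique(times):
            c = max(0.0, c + scores[times == t].sum())  # pool concurrent scores
            path.append((t, c, c > h))                  # signal when c exceeds h
        return path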

5.
Nonparametric methods, notably Theil's method and Hussain's method, have been applied to simple linear regression problems for estimating the slope of the regression line. We extend these methods and propose a robust estimator of the coefficient of a first-order autoregressive process under various distribution shapes. A simulation study comparing Theil's estimator, Hussain's estimator, the least squares estimator, and the proposed estimator is also presented.
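
The abstract does not spell out the proposed estimator, but Theil's building block is simple: the median of all pairwise slopes. A sketch follows; applying it to the lagged regression of x_t on x_{t-1} is an assumption about how the AR(1) extension might be constructed:

    import numpy as np
    from itertools import combinations

    def theil_slope(x, y):
        """Theil's estimator: median of pairwise slopes (y_j - y_i)/(x_j - x_i)."""
        slopes = [(y[j] - y[i]) / (x[j] - x[i])
                  for i, j in combinations(range(len(x)), 2)
                  if x[j] != x[i]]
        return np.median(slopes)

    def ar1_theil(series):
        """Robust AR(1) coefficient: Theil slope of x_t on x_{t-1}."""
        x = np.asarray(series, dtype=float)
        return theil_slope(x[:-1], x[1:])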

6.
This article suggests an alternative to the ratio estimator for estimating the total size of a subdomain of a population. The application that served as the genesis for this work is from auditing: the problem is to estimate the total of sales transactions that are not tax exempt from an audit sample of the population of nontaxed sales transactions. A superpopulation approach, which models a unit's probability of belonging to the subdomain as a function of its size, leads to a family of estimators. The simplest member of this family is one in which that function is specified to be a constant. The optimal estimator for this model performs markedly better than the ratio estimator when the assumption is true and often performs better when it is not, though in that case it is biased. Stratification is shown to reduce this bias and at the same time make the ratio estimator more similar to the optimal estimator. A simulation experiment shows that the theoretical advantages hold in a real audit population.
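
For reference, the ratio-estimator baseline the article compares against can be sketched as below, with x_i the recorded (book) amounts, y_i the sampled amounts found to belong to the subdomain, and the population total of x assumed known; all names are illustrative:

    import numpy as np

    def ratio_estimator(x_sample, y_sample, x_population_total):
        """Classical ratio estimator of a subdomain total."""
        r = np.sum(y_sample) / np.sum(x_sample)  # sample ratio of subdomain to book amounts
        return r * x_population_total            # scale up to the known population total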

7.
In this article we present a technique for implementing large-scale optimal portfolio selection. We use high-frequency daily data to capture valuable statistical information in asset returns. We describe several statistical issues involved in quantitative approaches to portfolio selection. Our methodology applies to large-scale portfolio-selection problems in which the number of possible holdings is large relative to the estimation period provided by historical data. We illustrate our approach on an equity database that consists of stocks from the Standard and Poor's index, and we compare our portfolios to this benchmark index. Our methodology differs from the usual quadratic programming approach to portfolio selection in three ways: (1) we employ informative priors on the expected returns and variance-covariance matrices, (2) we use daily data for estimation purposes, with upper and lower holding limits for individual securities, and (3) we use a dynamic asset-allocation approach that is based on reestimating and then rebalancing the portfolio weights on a prespecified time window. The key inputs to the optimization process are the predictive distributions of expected returns and the predictive variance-covariance matrix. We describe the statistical issues involved in modeling these inputs for high-dimensional portfolio problems in which our data frequency is daily. In our application, we find that our optimal portfolio outperforms the underlying benchmark.
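
One way to cast the rebalancing step, assuming the predictive mean returns mu and predictive variance-covariance matrix sigma are supplied by the Bayesian model; the risk-aversion parameter and holding limits here are illustrative, not the article's values:

    import numpy as np
    from scipy.optimize import minimize

    def optimal_weights(mu, sigma, lower=0.0, upper=0.05, risk_aversion=5.0):
        """Mean-variance portfolio weights with per-security holding limits."""
        mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
        n = len(mu)
        objective = lambda w: -(w @ mu) + 0.5 * risk_aversion * (w @ sigma @ w)
        fully_invested = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]
        res = minimize(objective, np.full(n, 1.0 / n),
                       bounds=[(lower, upper)] * n, constraints=fully_invested)
        return res.x

Note that with an upper limit of 0.05 the universe must contain at least 20 securities for the full-investment constraint to remain feasible, which is one reason holding limits matter most in large-scale problems.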

8.
This article estimates the speed of the adjustment coefficient in structural error-correction models. We use a system method for real exchange rates of traded and nontraded goods by combining a single-equation method with Hansen and Sargent's instrumental variables methods for linear rational expectations models. We apply these methods to a modified version of Mussa's model. Our results show that the half-lives of purchasing power parity deviations for the rates of traded goods are less than 1 year and are shorter than those for general price and for nontraded goods in most cases, implying a faster adjustment speed to parity.
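
The half-life arithmetic behind such statements is standard: if parity deviations decay as an AR(1) with persistence rho per period, the half-life h solves rho^h = 1/2, so h = log(1/2)/log(rho). A one-line check (the 0.93 figure is illustrative, not taken from the article):

    import numpy as np

    def half_life(rho):
        """Half-life of AR(1) deviations with persistence 0 < rho < 1."""
        return np.log(0.5) / np.log(rho)

    print(half_life(0.93))  # monthly persistence 0.93 -> roughly 9.5 months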

9.
We propose a new integer-valued time series process, called the generalized pth-order random coefficient integer-valued autoregressive process with signed thinning operator. This kind of process is appropriate for modeling integer-valued time series that can take negative values; strict stationarity and ergodicity of the process are established. Estimators of the model's parameters are derived, and their properties are studied via simulation. We apply our process to a real data example.
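
Definitions of the signed thinning operator vary in the literature; the sketch below simulates a first-order version under one common form of signed binomial thinning, with uniform random coefficients and Skellam-type innovations, all of which are assumptions made for illustration rather than the paper's exact specification:

    import numpy as np

    rng = np.random.default_rng(0)

    def signed_thin(alpha, x):
        """One common form of signed binomial thinning (an assumption):
        sgn(alpha) * sgn(x) * Binomial(|x|, |alpha|)."""
        return int(np.sign(alpha) * np.sign(x) * rng.binomial(abs(x), abs(alpha)))

    def simulate(n, x0=0):
        """Random-coefficient signed-thinning autoregression of order one."""
        x = [x0]
        for _ in range(n):
            alpha = rng.uniform(-0.5, 0.5)             # random coefficient each step
            eps = rng.poisson(1.0) - rng.poisson(1.0)  # integer innovation, can be negative
            x.append(signed_thin(alpha, x[-1]) + eps)
        return np.array(x)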

10.
Recently a limit theorem has been obtained for the limiting-stationary distribution of a process in which individuals reproduce as in a subcritical Galton-Watson process and are subject to an independent immigration component at each generation. This paper provides a different proof of this theorem, under slightly weaker conditions. A similar approach is used to obtain a limit form of Yaglom's theorem for the ordinary subcritical Galton-Watson process.
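
The setting is easy to make concrete by simulation: a subcritical offspring law (mean below one) plus i.i.d. immigration each generation produces a process whose long-run empirical distribution approximates the limiting-stationary law; the Poisson choices and parameters below are illustrative:

    import numpy as np

    rng = np.random.default_rng(1)

    def gwi_path(n, offspring_mean=0.7, immigration_mean=1.0, z0=0):
        """Subcritical Galton-Watson process with Poisson offspring and immigration."""
        z = [z0]
        for _ in range(n):
            offspring = rng.poisson(offspring_mean, size=z[-1]).sum()
            z.append(int(offspring) + rng.poisson(immigration_mean))
        return np.array(z)

    path = gwi_path(100_000)
    print(np.bincount(path[1000:]) / len(path[1000:]))  # empirical stationary law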

11.
We introduce a new two-sample inference procedure to assess the relative performance of two groups over time. Our model-free method does not assume proportional hazards, making it suitable for scenarios where nonproportional hazards may exist. Our procedure includes a diagnostic tau plot to identify changes in hazard timing and a formal inference procedure. The tau-based measures we develop are clinically meaningful and provide interpretable estimands to summarize the treatment effect over time. Our proposed statistic is a U-statistic and exhibits a martingale structure, allowing us to construct confidence intervals and perform hypothesis testing. Our approach is robust with respect to the censoring distribution. We also demonstrate how our method can be applied for sensitivity analysis in scenarios with missing tail information due to insufficient follow-up. Without censoring, the Kendall's tau estimator we propose reduces to the Wilcoxon-Mann-Whitney statistic. We evaluate our method using simulations to compare its performance with the restricted mean survival time and log-rank statistics. We also apply our approach to data from several published oncology clinical trials where nonproportional hazards may exist.
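
In the uncensored case the statistic has a simple closed form: the two-sample tau 2*P(X < Y) - 1 estimated over all pairs, which is a rescaled Wilcoxon-Mann-Whitney statistic. A sketch of that special case (the censoring-robust weighting of the actual proposal is not shown):

    import numpy as np

    def two_sample_tau(x, y):
        """Uncensored two-sample tau: 2 * P(X < Y) - 1, ties counted one half."""
        x, y = np.asarray(x), np.asarray(y)
        less = (x[:, None] < y[None, :]).mean()   # fraction of pairs with x_i < y_j
        ties = (x[:, None] == y[None, :]).mean()  # tied pairs get half weight
        return 2 * (less + 0.5 * ties) - 1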

12.
We extend a basic result of Huber's on least favorable distributions to the setting of conditional inference, using an approach based on the notion of log-Gâteaux differentiation and perturbed models. Whereas Huber considered intervals of fixed width for location parameters and their average coverage rates, we study error models having longest confidence intervals, conditional on the location configuration of the sample. Our version of the problem does not have a global solution, but one that changes from configuration to configuration. Asymptotically, the conditionally least-informative shape minimizes the conditional Fisher information. We characterize the asymptotic solution within Huber's contamination model.

13.
We describe inferactive data analysis, so named to denote an interactive approach to data analysis with an emphasis on inference after data analysis. Our approach is a compromise between Tukey's exploratory and confirmatory data analysis, allowing also for Bayesian data analysis. We see this as a useful step toward providing concrete tools (with statistical guarantees) for current data scientists. The basis of inference we use is (a conditional approach to) selective inference, in particular its randomized form. The relevant reference distributions are constructed from what we call a DAG-DAG, a Data Analysis Generative DAG, and a selective change-of-variables formula is crucial to any practical implementation of inferactive data analysis via sampling these distributions. We discuss a canonical example of an incomplete cross-validation test statistic to discriminate between black-box models, and a real HIV dataset example to illustrate inference after making multiple queries on the data.

14.
We propose a Bayesian approach for estimating hazard functions under the constraint of a monotone hazard ratio. We construct a model for the monotone hazard ratio utilizing Cox's proportional hazards model with a monotone time-dependent coefficient. To reduce computational complexity, we use a signed gamma process prior for the time-dependent coefficient and the Bayesian bootstrap prior for the baseline hazard function. We develop an efficient MCMC algorithm and illustrate the proposed method on simulated and real data sets.

15.
Multiple Imputation (MI) is an established approach for handling missing values. We show that MI for continuous data under the multivariate normal assumption is susceptible to generating implausible values. Our proposed remedy is to: (1) transform the observed data into quantiles of the standard normal distribution; (2) obtain a functional relationship between the observed data and its corresponding standard normal quantiles; (3) undertake MI using the quantiles produced in step 1; and finally, (4) use the functional relationship to transform the imputations back into their original domain. In conclusion, our approach safeguards MI from imputing implausible values.
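
A minimal sketch of steps 1, 2 and 4, using linear interpolation as the functional relationship (an assumption; the normal-model MI engine of step 3 is left abstract):

    import numpy as np
    from scipy.stats import norm

    def to_normal_quantiles(col):
        """Steps 1-2: map observed values to standard normal quantiles and
        return the (obs, z) pairs defining the monotone mapping."""
        col = np.asarray(col, dtype=float)
        obs = np.sort(col[~np.isnan(col)])
        probs = (np.arange(1, len(obs) + 1) - 0.5) / len(obs)
        z = norm.ppf(probs)
        out = np.full_like(col, np.nan)
        mask = ~np.isnan(col)
        out[mask] = np.interp(col[mask], obs, z)
        return out, obs, z

    def back_transform(imputed_z, obs, z):
        """Step 4: carry imputed quantiles back to the original domain."""
        return np.interp(imputed_z, z, obs)

Step 3 then runs standard multivariate-normal MI on the transformed columns; because back_transform interpolates within the observed range, imputations cannot stray outside plausible values, which is the point of the remedy.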

16.
We propose a flexible nonparametric estimator of a variance function from a one-dimensional process whose errors are nonstationary and correlated. Due to the nonstationarity, a local variogram is defined, and its asymptotic properties are derived. We include a bandwidth selection method for smoothing that takes into account the correlations in the errors. We compare the proposed difference-based nonparametric approach with Anderes and Stein's (2011) local-likelihood approach. Our method has a smaller integrated MSE, easily fixes the boundary bias, and requires far less computing time than the likelihood-based method.
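
A sketch of the difference-based idea under simplifying assumptions: equally spaced design and a plain Gaussian-kernel smooth of halved squared first differences; the local-variogram correction for correlated errors that the paper develops is not shown:

    import numpy as np

    def local_variance(x, y, grid, bandwidth):
        """Difference-based estimate of sigma^2(t): kernel smooth of
        (y_{i+1} - y_i)^2 / 2 placed at interval midpoints."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        mids = (x[1:] + x[:-1]) / 2
        d2 = np.diff(y) ** 2 / 2          # E[d2] = sigma^2 under independent errors
        w = np.exp(-0.5 * ((np.asarray(grid)[:, None] - mids) / bandwidth) ** 2)
        return (w @ d2) / w.sum(axis=1)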

17.
We investigate how to combine marginal assessments about the values that random variables assume separately into a model for the values that they assume jointly, when (i) these marginal assessments are modelled by means of coherent lower previsions and (ii) we have the additional assumption that the random variables are forward epistemically irrelevant to each other. We consider and provide arguments for two possible combinations, namely the forward irrelevant natural extension and the forward irrelevant product, and we study the relationships between them. Our treatment also uncovers an interesting connection between the behavioural theory of coherent lower previsions, and Shafer and Vovk's game-theoretic approach to probability theory.

18.
We describe a model to obtain strengths and rankings of players appearing in golf's Ryder Cup. Obtaining rankings is complicated for two reasons: first, competitors do not compete on an equal number of occasions, with some competitors appearing too infrequently for their ranking to be estimated with any degree of certainty; and second, different competitors experience different levels of volatility in results. Our approach is to assume the competitor strengths are drawn from some common distribution. For small numbers of competitors, as is the case here, we fit the model using Monte Carlo integration. Results suggest there is very little difference between the top-performing players, though Scotland's Colin Montgomerie is estimated as the strongest Ryder Cup player.

19.
In this paper, two measures of agreement among several sets of ranks, Kendall's concordance coefficient and the top-down concordance coefficient, are reviewed. To illustrate the utility of these measures, two examples, from the fields of health and sports, are presented. A Monte Carlo simulation study was carried out to compare the performance of Kendall's and top-down concordance coefficients in detecting several types and magnitudes of agreement. The data generation scheme was designed to induce agreement of varying intensity among m (m>2) sets of ranks in non-directional and directional rank agreement scenarios. The performance of each coefficient was estimated by the proportion of rejected null hypotheses, assessed at the 5% significance level, when testing whether the underlying population concordance coefficient is greater than zero. For the directional rank agreement scenario, the top-down concordance coefficient achieved a higher percentage of significant concordances than Kendall's concordance coefficient. In particular, when the degree of agreement was small, the results of the simulation study pointed to the advantage of using a weighted rank concordance, namely the top-down concordance coefficient, alongside Kendall's concordance coefficient, enabling the detection of agreement (in a top-down sense) in situations not detected by Kendall's coefficient.
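
For concreteness, both coefficients can be computed as below. Kendall's W uses raw ranks, while the top-down coefficient replaces them with Savage scores so that agreement on the top ranks is weighted more heavily; the top-down formula follows the Iman-Conover form, stated here as an assumption about the version the paper uses:

    import numpy as np

    def kendalls_w(ranks):
        """Kendall's W for an (m judges x n items) matrix of ranks 1..n."""
        ranks = np.asarray(ranks, dtype=float)
        m, n = ranks.shape
        s = np.sum((ranks.sum(axis=0) - m * (n + 1) / 2) ** 2)
        return 12 * s / (m ** 2 * (n ** 3 - n))

    def top_down_concordance(ranks):
        """Top-down coefficient: rank 1 gets the largest Savage score
        S_1 = 1 + 1/2 + ... + 1/n."""
        ranks = np.asarray(ranks)
        m, n = ranks.shape
        savage = np.cumsum(1.0 / np.arange(n, 0, -1))[::-1]  # savage[i-1] = sum_{j>=i} 1/j
        t = savage[ranks.astype(int) - 1].sum(axis=0)        # per-item score totals
        return (np.sum(t ** 2) - m ** 2 * n) / (m ** 2 * (n - savage[0]))

With perfect agreement among the m rankings both coefficients equal 1; under the directional scenario the Savage-score weighting is what lets the top-down coefficient pick up agreement concentrated in the top ranks.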

20.
Evaluation of system reliability for complex systems based on Taylor's approximation becomes increasingly intractable. Taguchi's concept of random experimentation was exploited by English et al. (1996) for discretization of complex systems and determination of reliability values. We indicate a few demerits of the discretization method and propose to retain the continuous character of the original problem by evaluating system reliability using a range approximation method. Our proposed method works better than the discretization approach in all three engineering problems considered for the purpose of demonstration.
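
For context, the Taylor-based baseline can be sketched as the classical first-order second-moment (FOSM) approximation: linearize the performance function g at the input means and treat g(X) as normal, so reliability is approximately Phi(mu_g / sigma_g). Independent inputs are assumed in this sketch, and the range-approximation method itself is not reproduced:

    import numpy as np
    from scipy.stats import norm

    def fosm_reliability(g, mu, sigma, eps=1e-6):
        """First-order Taylor (FOSM) approximation of P(g(X) > 0)
        for independent inputs with means mu and std deviations sigma."""
        mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
        grad = np.array([(g(mu + eps * e) - g(mu - eps * e)) / (2 * eps)
                         for e in np.eye(len(mu))])        # central differences
        g_sd = np.sqrt(np.sum((grad * sigma) ** 2))
        return norm.cdf(g(mu) / g_sd)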

