Similar Literature
20 similar records found (search time: 15 ms)
1.
Failure time data occur in many areas and in various censoring forms, and many models have been proposed for their regression analysis, such as the proportional hazards model and the proportional odds model. Another choice discussed in the literature is a general class of semiparametric transformation models, which includes the two models above and many others as special cases. In this paper, we consider this class of models when one faces a general type of censored data, case K informatively interval-censored data, for which there does not seem to exist an established inference procedure. For this problem, we present a two-step estimation procedure that is quite flexible and easily implemented, and we establish the consistency and asymptotic normality of the proposed estimators of the regression parameters. In addition, an extensive simulation study suggests that the proposed procedure works well in practical situations. An application is also provided.

2.
ABSTRACT

Partially varying coefficient single-index models (PVCSIM) are a class of semiparametric regression models. One important assumption is that the model errors are independently and identically distributed, which may contradict reality in many applications. For example, in economic and financial applications, the observations may be serially correlated over time. Based on the empirical likelihood technique, we propose a procedure for testing serial correlation of the random error in PVCSIM. Under some regularity conditions, we show that the proposed empirical likelihood ratio statistic asymptotically follows a standard χ2 distribution. We also present some numerical studies to illustrate the performance of the proposed testing procedure.
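As background for the empirical likelihood technique used in this abstract, here is a minimal sketch (not the authors' implementation) of Owen's empirical likelihood ratio for the simplest constraint, a scalar mean of zero; a serial-correlation test of this kind would replace the data `z_i` with estimating-function values built from model residuals:

```python
import numpy as np

def el_log_ratio(z, tol=1e-10, max_iter=200):
    """-2 log empirical likelihood ratio for H0: E[Z] = 0, scalar data.

    Solves the profile condition sum_i z_i / (1 + lam * z_i) = 0 for the
    Lagrange multiplier lam by damped Newton iteration; requires 0 to lie
    strictly inside the range of the data.
    """
    z = np.asarray(z, dtype=float)
    if not (z.min() < 0.0 < z.max()):
        raise ValueError("0 must lie inside the convex hull of the data")
    lam = 0.0
    for _ in range(max_iter):
        denom = 1.0 + lam * z
        g = np.sum(z / denom)            # profile score in lam
        gp = -np.sum((z / denom) ** 2)   # its derivative (always negative)
        lam_new = lam - g / gp
        # damp the Newton step so all implied weights stay positive
        while np.any(1.0 + lam_new * z <= 0.0):
            lam_new = 0.5 * (lam + lam_new)
        if abs(lam_new - lam) < tol:
            lam = lam_new
            break
        lam = lam_new
    return 2.0 * np.sum(np.log1p(lam * z))
```

Under standard conditions the statistic is compared with a χ² quantile, mirroring the Wilks-type result stated in the abstract.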

3.
ABSTRACT

In incident cohort studies, survival data often include subjects who have had an initiating event at recruitment and may potentially experience two successive events (first and second) during the follow-up period. When disease registries or surveillance systems collect data based on incidence occurring within a specific calendar time interval, the initial event is usually subject to double truncation. Furthermore, since the second duration process is observable only if the first event has occurred, double truncation and dependent censoring arise. In this article, under these two sampling biases and with an unspecified distribution of the truncation variables, we propose a nonparametric estimator of the joint survival function of the two successive duration times using the inverse-probability-weighted (IPW) approach. The consistency of the proposed estimator is established. Based on the estimated marginal survival functions, we also propose a two-stage procedure for estimating the parameters of a copula model. The bootstrap method is used to construct confidence intervals. Numerical studies demonstrate that the proposed estimation approaches perform well with moderate sample sizes.

4.
Abstract

We consider the classification of high-dimensional data under the strongly spiked eigenvalue (SSE) model. We create a new classification procedure based on the high-dimensional eigenstructure in the high-dimension, low-sample-size context. We propose a distance-based classification procedure using a data transformation. We also prove that the proposed classification procedure is consistent in terms of misclassification rates. We discuss the performance of the classification procedure in simulations and in real data analyses using microarray data sets.
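The distance-based idea can be illustrated with a bare nearest-centroid rule; the paper's procedure additionally applies a data transformation derived from the SSE eigenstructure, which this sketch omits:

```python
import numpy as np

def centroid_classify(X_train, y_train, X_new):
    """Assign each new observation to the class with the nearest centroid.

    A minimal distance-based rule for high-dimensional data; distances
    are plain Euclidean, computed against per-class training centroids.
    """
    classes = np.unique(y_train)
    centroids = np.array([X_train[y_train == c].mean(axis=0) for c in classes])
    # squared distances: shape (n_new, n_classes)
    d = ((X_new[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return classes[np.argmin(d, axis=1)]
```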

5.
Abstract

In this article, we introduce a new distribution-free Shewhart-type control chart implementing a modified Wilcoxon-type rank sum statistic based on progressive Type-II censored reference data. The proposed chart is also a tool for monitoring incomplete data, because the censoring scheme applied allows the protection of experimental units at an early stage of the testing procedure. The setup of the new nonparametric control chart is presented in detail, and its operating characteristic function is studied. Explicit formulae for the evaluation of the alarm rate and average run length (ARL) for both in-control and out-of-control situations are established. A numerical study depicts the performance and robustness of the proposed control chart. For illustration purposes, a practical example is also discussed.
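The average run length logic for a memoryless Shewhart-type chart can be sketched as follows; the specific alarm rate of the rank-sum chart is not reproduced here, so the alarm probability is treated as an input:

```python
import numpy as np

def average_run_length(alarm_prob):
    """For a memoryless Shewhart-type chart the run length is geometric,
    so ARL = 1 / (alarm rate)."""
    return 1.0 / alarm_prob

def simulate_arl(alarm_prob, n_runs=20000, rng=None):
    """Monte Carlo check: average number of plotted points until the
    first alarm, drawn directly from the geometric distribution."""
    rng = np.random.default_rng(rng)
    # numpy's geometric already counts trials up to and including success
    return rng.geometric(alarm_prob, size=n_runs).mean()
```

For the classical 3-sigma normal chart, for instance, the in-control alarm rate is about 0.0027, giving the familiar in-control ARL of roughly 370.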

6.
Abstract

Comparing two hazard rate functions to evaluate a treatment effect is an important issue in survival analysis. It is quite common for the two hazard rate functions to cross each other at one or more unknown time points, representing temporal changes in the treatment effect. In certain applications, besides survival data, related longitudinal data on some time-dependent covariates are also available. In such cases, a joint model that accommodates both types of data allows us to infer the association between the survival and longitudinal data and to assess the treatment effect better. In this paper, we propose a modelling approach for comparing two crossing hazard rate functions by jointly modelling survival and longitudinal data. The parameters of the proposed joint model are estimated by maximum likelihood using the EM algorithm. Asymptotic properties of the maximum likelihood estimators are studied. To illustrate the virtues of the proposed method, we compare its performance with several existing methods in a simulation study. The proposed method is also demonstrated using a real dataset obtained from an HIV clinical trial.

7.
Abstract

The objective of this paper is to propose an efficient estimation procedure for a marginal mean regression model for longitudinal count data and to develop a hypothesis test for detecting the presence of overdispersion. We extend the matrix expansion idea of quadratic inference functions to the negative binomial regression framework, which accommodates both within-subject correlation and overdispersion. Theoretical and numerical results show that the proposed procedure yields an asymptotically more efficient estimator than one ignoring either the within-subject correlation or the overdispersion. When overdispersion is absent, however, the proposed method might reduce estimation efficiency in practice, while the Poisson-based regression model fits the data sufficiently well. We therefore construct a hypothesis test that recommends an appropriate model for the analysis of correlated count data. Extensive simulation studies indicate that the proposed test identifies the effective model consistently. The procedure is also applied to a transportation safety study, where it recommends the proposed negative binomial regression model.
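A flavour of testing for overdispersion can be given with a Cameron–Trivedi-style score statistic; this is a textbook device under an intercept-only Poisson fit, not the quadratic-inference-function test developed in the paper:

```python
import numpy as np

def overdispersion_stat(y):
    """Cameron-Trivedi-type score statistic for overdispersion in count
    data, using an intercept-only Poisson fit (mu_hat = mean(y)).

    Regresses w_i = ((y_i - mu)^2 - y_i) / mu on mu without intercept;
    under the Poisson null the t-ratio is asymptotically N(0, 1), and
    large positive values point to negative binomial-type variance
    mu + alpha * mu^2 with alpha > 0.
    """
    y = np.asarray(y, dtype=float)
    mu = y.mean()
    w = ((y - mu) ** 2 - y) / mu
    x = np.full_like(w, mu)
    beta = (x @ w) / (x @ x)
    resid = w - beta * x
    se = np.sqrt((resid @ resid) / (len(y) - 1) / (x @ x))
    return beta / se
```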

8.
ABSTRACT

This article presents a new test for unit roots based on least absolute deviation estimation, specifically designed for time series with autoregressive errors. The methodology is a bootstrap scheme based on estimating a model and then the innovations. The resampling is performed under the null hypothesis and, as is customary in bootstrap procedures, is automatic and does not rely on the calculation of any nuisance parameter. The validity of the procedure is established, and the asymptotic distribution of the proposed statistic is proved to converge to the correct distribution. To analyze the finite-sample performance of the test, a Monte Carlo study is conducted, showing very good behavior in many different situations.
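The core of such a bootstrap scheme, estimate under the null, resample the innovations, rebuild the series, and recompute the statistic, can be sketched with a plain Dickey–Fuller-type t-ratio and iid residual resampling (the paper uses least absolute deviation estimation and handles autoregressive errors, both omitted from this sketch):

```python
import numpy as np

def df_stat(y):
    """Dickey-Fuller-type statistic: OLS t-ratio of rho in
    Delta y_t = rho * y_{t-1} + e_t (no constant)."""
    dy, ylag = np.diff(y), y[:-1]
    rho = (ylag @ dy) / (ylag @ ylag)
    resid = dy - rho * ylag
    s2 = resid @ resid / (len(dy) - 1)
    return rho / np.sqrt(s2 / (ylag @ ylag))

def bootstrap_null(y, n_boot=499, rng=None):
    """Residual bootstrap under the unit-root null: rebuild random walks
    from resampled, centered null residuals and recompute the statistic."""
    rng = np.random.default_rng(rng)
    e = np.diff(y)          # residuals imposing rho = 0 (unit root)
    e = e - e.mean()
    stats = np.empty(n_boot)
    for b in range(n_boot):
        eb = rng.choice(e, size=len(e), replace=True)
        yb = np.concatenate(([y[0]], y[0] + np.cumsum(eb)))
        stats[b] = df_stat(yb)
    return stats
```

The observed statistic is then compared with the empirical quantiles of the bootstrap null distribution instead of tabulated critical values.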

9.
Abstract

To improve the empirical performance of the Black–Scholes model, many alternative models have been proposed to address the leptokurtic feature, volatility smile, and volatility clustering effects of asset return distributions. However, analytical tractability remains a problem for most alternative models. In this article, we study a class of hidden Markov models, including Markov switching models and stochastic volatility models, that can incorporate the leptokurtic feature and volatility clustering effects while providing analytical solutions to option pricing. We show that these models can generate long memory phenomena when the transition probabilities depend on the time scale. We also provide an explicit analytic formula for the arbitrage-free price of European options under these models. The issues of statistical estimation and errors in option pricing are also discussed for the Markov switching models.
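For reference, the baseline Black–Scholes price that these models generalize; a regime-switching price can be viewed as a mixture of such prices over volatility paths, but only the classical formula is reproduced here:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call on a non-dividend-paying
    asset: spot S, strike K, maturity T (years), rate r, volatility sigma."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
```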

10.

Engineers who conduct reliability tests need to choose the sample size when designing a test plan. The model parameters and quantiles are the typical quantities of interest. The large-sample procedure relies on the property that the distribution of t-like quantities is close to the standard normal in large samples. In this paper, we use a new procedure, based on both simulation and asymptotic theory, to determine the sample size for a test plan. Unlike the complete-data case, t-like quantities are in general not pivotal when data are time censored. However, we show that the distribution of the t-like quantities depends only on the expected proportion failing, and we obtain the distributions by simulation for both the complete and time-censored cases when the data follow a Weibull distribution. We find that the large-sample procedure usually underestimates the sample size, even when that sample size is 200 or more. The sample size given by the proposed procedure ensures the requested nominal accuracy and confidence of the estimation when the test plan results in complete or time-censored data. Some useful figures displaying the required sample size for the new procedure are also presented.
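The simulation idea can be sketched in a simplified setting, exponential rather than Weibull lifetimes (so the maximum likelihood estimator is closed form), with Type-I time censoring; all names here are illustrative:

```python
import numpy as np

def t_like_sample(n, mu, tc, n_sim=5000, rng=None):
    """Simulate the t-like quantity (log mu_hat - log mu) * sqrt(r) for
    exponential(mu) lifetimes Type-I censored at tc, where mu_hat is the
    total time on test divided by the number of failures r.

    Its distribution depends on n and the expected proportion failing
    1 - exp(-tc / mu); comparing its quantiles with N(0, 1) quantiles
    shows how far the large-sample normal approximation is off for a
    given n, which drives the simulation-based sample-size choice.
    """
    rng = np.random.default_rng(rng)
    t = rng.exponential(mu, size=(n_sim, n))
    r = (t <= tc).sum(axis=1)            # failures per simulated sample
    keep = r > 0                         # drop degenerate no-failure runs
    ttt = np.minimum(t, tc).sum(axis=1)  # total time on test
    mu_hat = ttt[keep] / r[keep]
    return (np.log(mu_hat) - np.log(mu)) * np.sqrt(r[keep])
```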

11.
ABSTRACT

This paper proposes a power-transformed linear quantile regression model for the residual lifetime of competing risks data. The proposed model can describe the association between the covariates and any quantile of the time-to-event distribution among survivors beyond a specific time point. Under covariate-dependent censoring, we develop a two-step estimation procedure, comprising an unbiased monotone estimating equation for the regression parameters and cumulative sum processes for the Box–Cox transformation parameter. The asymptotic properties of the estimators are also derived. We employ an efficient bootstrap method for estimating the variance–covariance matrix. The finite-sample performance of the proposed approaches is evaluated through simulation studies and a real example.

12.
Abstract

The class of transmuted distributions has received a lot of attention in the recent statistical literature. In this paper, we propose a rich family of bivariate distributions whose conditionals are transmuted distributions. The new family depends on two baseline distributions and three dependence parameters. Apart from the general properties, we also study the distribution of concomitants of order statistics. We study specific bivariate models, propose estimation methodologies, and conduct a simulation study. The usefulness of this family is established by fitting it to well-analyzed real lifetime data.
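The transmuted construction itself is the standard quadratic rank transmutation: G(x) = (1 + λ)F(x) − λF(x)² with |λ| ≤ 1 and baseline CDF F. A univariate sketch, with function names of our own choosing:

```python
import numpy as np

def transmuted_cdf(F, lam):
    """Transmuted CDF value: G = (1 + lam) * F - lam * F**2, where F is
    a baseline CDF value in [0, 1] and |lam| <= 1."""
    return (1.0 + lam) * F - lam * F ** 2

def transmuted_sample(base_ppf, lam, size, rng=None):
    """Draw from a transmuted distribution by inverse transform:
    solve lam * p**2 - (1 + lam) * p + u = 0 for the baseline CDF level p
    (taking the root in [0, 1]), then apply the baseline quantile."""
    rng = np.random.default_rng(rng)
    u = rng.uniform(size=size)
    if lam == 0:
        p = u
    else:
        p = ((1 + lam) - np.sqrt((1 + lam) ** 2 - 4 * lam * u)) / (2 * lam)
    return base_ppf(p)
```

With a uniform baseline (`base_ppf` the identity) and λ = 1, for example, the resulting density is 2(1 − x) on [0, 1].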

13.
Asymmetric behaviour in both mean and variance is often observed in real time series. The approach we adopt is based on the double threshold autoregressive conditionally heteroscedastic (DTARCH) model with normal innovations. This model allows threshold nonlinearity in mean and volatility to be modelled as a result of the impact of lagged changes in assets and squared shocks, respectively. A methodology for building DTARCH models based on genetic algorithms (GAs) is proposed. The most important structural parameters, that is, the regimes and thresholds, are searched for by GAs, while the remaining structural parameters, that is, the delay parameters and model orders, vary in pre-specified intervals and are determined by exhaustive search using an Akaike Information Criterion (AIC)-like criterion. For each trial set of structural parameters, a DTARCH model is fitted that maximizes the (penalized) likelihood (AIC criterion), using the iteratively weighted least squares algorithm. The best model according to the AIC criterion is then chosen. An extension to the double threshold generalized ARCH (DTGARCH) model is also considered. The proposed methodology is checked using both simulated and market index data. Our findings show that the GA-based procedure yields results comparable to those reported in the literature for real time series. For artificial time series, the proposed procedure seems able to fit the data quite well. In particular, a comparison is performed between the present procedure and the method proposed by Tsay [Tsay, R.S., 1989. Testing and modeling threshold autoregressive processes. Journal of the American Statistical Association, Theory and Methods, 84, 231–240.] for estimating the delay parameter. The former almost always yields better results than the latter. However, adopting Tsay's procedure as a preliminary stage for finding an appropriate delay parameter may save computational time, especially if the delay parameter can vary in a large interval.

14.
Abstract

Measuring the accuracy of diagnostic tests is crucial in many application areas, including medicine, machine learning, and credit scoring. The receiver operating characteristic (ROC) curve and surface are useful tools for assessing the ability of diagnostic tests to discriminate between ordered classes or groups. Defining these diagnostic tests requires selecting the optimal thresholds that maximize their accuracy. One commonly used procedure for finding the optimal thresholds is maximizing what is known as Youden's index. This article presents nonparametric predictive inference (NPI) for selecting the optimal thresholds of a diagnostic test. NPI is a frequentist statistical method that is explicitly aimed at using few modeling assumptions, enabled through the use of lower and upper probabilities to quantify uncertainty. Based on multiple future observations, the NPI approach is presented for selecting the optimal thresholds in two-group and three-group scenarios. In addition, a pairwise approach is presented for the three-group scenario. The article ends with an example illustrating the proposed methods and a simulation study of their predictive performance alongside some classical methods based on Youden's index. The NPI-based methods show some interesting results that overcome some of the issues concerning the predictive performance of Youden's index.
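For context, Youden's index, the classical comparator in this abstract, picks the threshold t maximizing J(t) = sensitivity(t) + specificity(t) − 1. A minimal two-group sketch, with higher scores taken to indicate disease:

```python
import numpy as np

def youden_threshold(scores_healthy, scores_diseased, thresholds=None):
    """Return (optimal threshold, maximal Youden's index J) over a grid
    of candidate thresholds, defaulting to the observed score values."""
    scores_healthy = np.asarray(scores_healthy, dtype=float)
    scores_diseased = np.asarray(scores_diseased, dtype=float)
    if thresholds is None:
        thresholds = np.unique(np.concatenate([scores_healthy, scores_diseased]))
    sens = np.array([(scores_diseased >= t).mean() for t in thresholds])
    spec = np.array([(scores_healthy < t).mean() for t in thresholds])
    j = sens + spec - 1.0
    best = np.argmax(j)
    return thresholds[best], j[best]
```

Perfectly separated groups attain J = 1; J = 0 corresponds to a test no better than chance.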

15.
ABSTRACT

For monitoring systemic risk from regulators’ point of view, this article proposes a relative risk measure, which is sensitive to the market comovement. The asymptotic normality of a nonparametric estimator and its smoothed version is established when the observations are independent. To effectively construct an interval without complicated asymptotic variance estimation, a jackknife empirical likelihood inference procedure based on the smoothed nonparametric estimation is provided with a Wilks type of result in case of independent observations. When data follow from AR-GARCH models, the relative risk measure with respect to the errors becomes useful and so we propose a corresponding nonparametric estimator. A simulation study and real-life data analysis show that the proposed relative risk measure is useful in monitoring systemic risk.

16.
Abstract

Recurrent event data are frequently encountered in longitudinal studies. In many applications, the times between successive recurrent events (gap times) are often of interest and lead to problems that have received much attention recently. In this article, using the approach of inverse probability-of-censoring weights (IPCW), we propose nonparametric estimators for the estimation of the bivariate distribution and survival functions for gap times of recurrent event data. We also consider the estimation of Kendall’s tau for two gap times by expressing it as an integral functional of the bivariate survival function. The asymptotic properties of the proposed estimators are established. Simulation studies are conducted to investigate their finite sample performance.
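For complete (uncensored) data, the functional being estimated reduces to the familiar sample version of Kendall's tau between the two gap times; the paper's estimator attaches an inverse probability-of-censoring weight to each pairwise concordance indicator, which this sketch omits:

```python
import numpy as np

def kendalls_tau(x, y):
    """Sample Kendall's tau for paired gap times, complete data only:
    average of sign(x_i - x_j) * sign(y_i - y_j) over all pairs."""
    x, y = np.asarray(x), np.asarray(y)
    n = len(x)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            s += np.sign(x[i] - x[j]) * np.sign(y[i] - y[j])
    return 2.0 * s / (n * (n - 1))
```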

17.
ABSTRACT

One main challenge for statistical prediction with data from multiple sources is that not all the associated covariate data are available for many sampled subjects. Consequently, we need new statistical methodology to handle this type of “fragmentary data”, which has become increasingly common in recent years. In this article, we propose a novel method based on frequentist model averaging that fits candidate models using all available covariate data. The weights in model averaging are selected by delete-one cross-validation based on the data from complete cases. The optimality of the selected weights is rigorously proved under some conditions. The finite sample performance of the proposed method is confirmed by simulation studies. An example of personal income prediction, based on real data from a leading e-community of wealth management in China, is also presented for illustration.

18.
ABSTRACT

Genetic data are frequently categorical and have complex dependence structures that are not always well understood. For this reason, clustering and classification based on genetic data, while highly relevant, are challenging statistical problems. Here we consider a versatile U-statistics-based approach for non-parametric clustering that allows for an unconventional way of solving these problems. We propose a statistical test to assess group homogeneity, taking multiple testing issues into account, and a clustering algorithm based on dissimilarities within and between groups that greatly speeds up the homogeneity test. We also propose a test to verify the significance of classifying a sample into one of two groups. We present Monte Carlo simulations that evaluate the size and power of the proposed tests under different scenarios. Finally, the methodology is applied to three different genetic data sets: global human genetic diversity, breast tumour gene expression, and Dengue virus serotypes. These applications showcase this statistical framework's ability to answer diverse biological questions in the high-dimension, low-sample-size scenario while adapting to the specificities of the different data types.

19.
Under the case-cohort design introduced by Prentice (Biometrika 73:1–11, 1986), covariate histories are ascertained only for the subjects who experience the event of interest (i.e., the cases) during the follow-up period and for a relatively small random sample from the original cohort (i.e., the subcohort). The case-cohort design has been widely used in clinical and epidemiological studies to assess the effects of covariates on failure times. Most statistical methods developed for the case-cohort design use the proportional hazards model, and few methods allow for time-varying regression coefficients. In addition, most methods disregard data from subjects outside of the subcohort, which can result in inefficient inference. Addressing these issues, this paper proposes an estimation procedure for the semiparametric additive hazards model with case-cohort/two-phase sampling data, allowing the covariates of interest to be missing for cases as well as for non-cases. A more flexible form of the additive model is considered that allows the effects of some covariates to be time varying while specifying the effects of others to be constant. An augmented inverse probability weighted estimation procedure is proposed. The proposed method allows utilizing auxiliary information that correlates with the phase-two covariates to improve efficiency. The asymptotic properties of the proposed estimators are established. An extensive simulation study shows that the augmented inverse probability weighted estimation is more efficient than the widely adopted inverse probability weighted complete-case estimation method. The method is applied to analyze data from a preventive HIV vaccine efficacy trial.
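The inverse probability weighting principle behind the complete-case comparator can be sketched for a simple target, a covariate mean: cases are always observed (weight 1), while non-cases are observed only if sampled into the subcohort with probability q (weight 1/q). The names and the toy target are ours, not the paper's:

```python
import numpy as np

def ipw_case_cohort_mean(x, is_case, in_subcohort, q):
    """Inverse probability weighted complete-case estimate of E[X] under
    case-cohort sampling: x is observed for all cases and for subjects in
    a subcohort drawn with known probability q; observed non-cases are
    up-weighted by 1/q to represent the unobserved remainder."""
    x = np.asarray(x, dtype=float)
    observed = is_case | in_subcohort
    w = np.where(is_case, 1.0, 1.0 / q)[observed]
    return np.average(x[observed], weights=w)
```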

20.
In this paper we propose two new classes of asymptotically distribution-free Rényi-type tests for testing the equality of two risks in a competing risks model with possible censoring. This work extends that of Aly, Kochar and McKeague [1994, Journal of the American Statistical Association, 89, 994–999], and many of the existing tests for this problem belong to the newly proposed classes. The asymptotic properties of the proposed tests are investigated. Simulation studies compare their performance with existing tests. A competing risks data set is analyzed to demonstrate the usefulness of the procedure.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号