1.
Heng-Hui Lue, Communications in Statistics - Theory and Methods, 2013, 42(20): 3276-3286
We consider a nonlinear censored regression problem with a vector of predictors. Censoring makes high-dimensional regression analysis considerably more complicated and can cause severe bias in estimation, so an adjustment for this bias is needed. Based on a weight adjustment, we develop a modification of sliced average variance estimation for estimating the lifetime central subspace without requiring a prespecified parametric model. Our proposed method preserves as much regression information as possible. Simulation results are reported, and comparisons are made with the sliced inverse regression of Li et al. (1999).
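The dimension-reduction machinery behind this work can be illustrated on fully observed data. The sketch below implements standard sliced average variance estimation (SAVE), not the authors' weight-adjusted version for censored lifetimes; the simulated single-index model and all parameter choices are illustrative assumptions.

```python
import numpy as np

def save_directions(X, y, n_slices=5):
    """Basic sliced average variance estimation (SAVE).

    Returns eigenvectors of M = sum over slices of w_s * (I - cov(Z | slice))^2,
    where Z is the standardized predictor, mapped back to the original scale.
    """
    n, p = X.shape
    # Standardize predictors: Z = (X - mean) @ Sigma^{-1/2}
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    inv_sqrt = eigvec @ np.diag(eigval ** -0.5) @ eigvec.T
    Z = (X - mu) @ inv_sqrt
    # Slice on the ordered response and accumulate the SAVE kernel matrix
    order = np.argsort(y)
    M = np.zeros((p, p))
    for chunk in np.array_split(order, n_slices):
        V = np.eye(p) - np.cov(Z[chunk], rowvar=False)
        M += (len(chunk) / n) * (V @ V)
    vals, vecs = np.linalg.eigh(M)
    # Leading eigenvectors (largest eigenvalue first), back on the X scale
    return inv_sqrt @ vecs[:, ::-1]

rng = np.random.default_rng(0)
n, p = 2000, 5
X = rng.standard_normal((n, p))
beta = np.array([1.0, 1.0, 0.0, 0.0, 0.0]) / np.sqrt(2)
y = (X @ beta) ** 2 + 0.1 * rng.standard_normal(n)  # symmetric link: SAVE's strong suit
b_hat = save_directions(X, y)[:, 0]
cos = abs(b_hat @ beta) / np.linalg.norm(b_hat)
```

A symmetric link such as the quadratic above is exactly the case where sliced inverse regression fails and SAVE succeeds, which motivates the comparison with Li et al. (1999).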
2.
In this article, we propose a modification of the recently introduced divergence information criterion (DIC; Mattheou et al., 2009) for determining the order of an autoregressive process, and we show that it is an asymptotically unbiased estimator of the expected overall discrepancy, a nonnegative quantity that measures the distance between the true unknown model and a fitted approximating model. Further, we use Monte Carlo methods and various data-generating processes for small, medium, and large sample sizes to explore the capabilities of the new criterion in selecting the optimal order in autoregressive processes and, more generally, in a time series context. The new criterion shows remarkably good results, choosing the correct model more frequently than traditional information criteria.
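The order-selection recipe being evaluated is generic: fit AR(p) for each candidate order, score each fit with a criterion, and keep the minimizer. The sketch below uses AIC as a stand-in penalty, since the divergence-based DIC penalty of Mattheou et al. (2009) is not reproduced here; the simulated AR(2) process is an illustrative assumption.

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares fit of an AR(p) model; returns coefficients and residual variance."""
    n = len(x)
    Y = x[p:]
    Z = np.column_stack([x[p - k:n - k] for k in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    resid = Y - Z @ coef
    return coef, resid @ resid / len(Y)

def select_order(x, max_p=6):
    """Pick the AR order minimizing n*log(sigma2_hat) + 2p (AIC-type stand-in)."""
    n = len(x)
    crits = []
    for p in range(1, max_p + 1):
        _, s2 = fit_ar(x, p)
        crits.append(n * np.log(s2) + 2 * p)
    return 1 + int(np.argmin(crits))

# Simulate a stationary AR(2) process and recover its order
rng = np.random.default_rng(1)
n = 1000
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_normal()
p_hat = select_order(x)
```

Swapping the AIC line for the DIC expression is the only change needed to reproduce the article's comparison.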
3.
R. Hasan Abadi, Communications in Statistics - Simulation and Computation, 2013, 42(8): 1430-1443
Censored data arise naturally in a number of fields, particularly in reliability and survival analysis. Among the several types of censoring, in this article we confine ourselves to right random censoring. Recently, Ahmadi et al. (2010) considered the problem of estimating unknown parameters in a general framework based on right randomly censored data. They assumed that the survival function of the censoring time is free of the unknown parameter, an assumption that is sometimes inappropriate. In such cases, a proportional odds (PO) model may be more suitable (Lam and Leung, 2001). Under this model, we obtain point and interval estimates for the unknown parameters. Since it is important to check the adequacy of the models upon which inferences are based (Lawless, 2003, p. 465), we also propose two new goodness-of-fit tests for the PO model based on right randomly censored data. The proposed procedures are applied to two real data sets due to Smith (2002), and a Monte Carlo simulation study is conducted to examine the behavior of the estimators.
4.
Pao-sheng Shen, Communications in Statistics - Simulation and Computation, 2013, 42(4): 531-543
Double censoring arises when T represents an outcome variable that can be accurately measured only within a certain range [L, U], where L and U are the left- and right-censoring variables, respectively. When L is always observed, we consider empirical likelihood inference for linear transformation models, based on the martingale-type estimating equation proposed by Chen et al. (2002). It is demonstrated that both the approach of Lu and Liang (2006) and that of Yu et al. (2011) can be extended to doubly censored data. Simulation studies are conducted to investigate the performance of the empirical likelihood ratio methods.
5.
In this article, we improve upon the techniques of Singh and Grewal (2013) and Hussain et al. (2016) by introducing a new two-stage randomized response procedure. The proposed technique achieves better efficiency and greater protection of respondents' privacy than the Kuk (1990), Singh and Grewal (2013), and Hussain et al. (2016) models. The relative efficiency and respondent protection of the proposed two-stage randomization device are investigated through a simulation study, and situations are reported in which the proposed estimator outperforms its competitors. The SAS code used to investigate the performance of the proposed strategy is also provided.
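The unbiased-inversion logic common to all these randomized response devices can be seen in Warner's (1965) classic single-stage model, sketched below as background; the two-stage designs of Kuk, Singh-Grewal, and Hussain et al. are not reproduced, and the numbers chosen are illustrative.

```python
import numpy as np

def warner_estimate(answers, P):
    """Warner (1965) randomized-response estimator of a sensitive proportion pi.

    Each respondent answers the sensitive question truthfully with probability P
    and its complement with probability 1 - P, so
        Pr(yes) = pi * P + (1 - pi) * (1 - P),
    which inverts to the moment estimator below (requires P != 1/2).
    """
    lam = np.mean(answers)
    return (lam - (1 - P)) / (2 * P - 1)

rng = np.random.default_rng(2)
pi_true, P, n = 0.30, 0.7, 100_000
truth = rng.random(n) < pi_true          # latent sensitive status, never observed directly
use_direct = rng.random(n) < P           # which question the randomization device selects
answers = np.where(use_direct, truth, ~truth).astype(float)
pi_hat = warner_estimate(answers, P)
```

Two-stage devices add a second randomization step to the same inversion idea, trading a little efficiency for extra privacy protection.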
6.
N. Balakrishnan, Communications in Statistics - Theory and Methods, 2013, 42(5): 880-906
In this article, we establish several recurrence relations for the single and product moments of progressively Type-II right censored order statistics from a log-logistic distribution. Using these relations in a systematic recursive manner enables the computation of all the means, variances, and covariances of progressively Type-II right censored order statistics from the log-logistic distribution for all sample sizes n, effective sample sizes m, and all progressive censoring schemes (R1, …, Rm). The results established here generalize the corresponding results for the usual order statistics due to Balakrishnan and Malik (1987) and Balakrishnan et al. (1987). The moments so determined are then used to derive best linear unbiased estimators for the scale and location-scale log-logistic distributions. These estimates are compared with the maximum likelihood estimates through Monte Carlo simulation. The best linear unbiased predictors of progressively censored failure times are then discussed briefly. Finally, a numerical example illustrates all the methods of inference developed here.
7.
Motivated by the covariate-adjusted regression (CAR) of Sentürk and Müller (2005) and by an applied problem, in this article we introduce and investigate a covariate-adjusted partially linear regression model (CAPLM), in which both the response and the predictor vector can be observed only after being distorted by some multiplicative factors, and an additional variable such as age or period is taken into account. Although our model appears to be a special case of the covariate-adjusted varying coefficient model (CAVCM) of Sentürk (2006), the data types of CAPLM and CAVCM are basically different, and hence the methods for inferring the two models differ. We employ an estimation method motivated by Cui et al. (2008) to fit the new model. Furthermore, under some mild conditions, we obtain the asymptotic normality of the estimator of the parametric component. Combined with a consistent estimate of the asymptotic covariance, this yields confidence intervals for the regression coefficients. Simulations and a real data analysis illustrate the new model and methods.
8.
9.
Difference-based estimators of the error variance are popular because they do not require estimating the mean function. Unlike most existing difference-based estimators, those proposed by Müller et al. (2003) and Tong and Wang (2005) achieve the same asymptotically optimal rate as residual-based estimators. In this article, we study the relative errors of these difference-based estimators, which leads to a better understanding of how they differ from residual-based estimators. To compute the relative error of the covariate-matched U-statistic estimator of Müller et al. (2003), we develop a modified version using simpler weights. We further investigate its asymptotic properties for both equidistant and random designs and show that our modified estimator is asymptotically efficient.
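The core idea of estimating the error variance without estimating the mean function is easiest to see in the simplest first-order difference estimator (Rice, 1984), sketched below; this is not the covariate-matched U-statistic of Müller et al. (2003), and the test-function and noise level are illustrative assumptions.

```python
import numpy as np

def rice_variance(y):
    """First-order difference-based estimator of the error variance:

        sigma2_hat = sum_i (y[i+1] - y[i])^2 / (2 (n - 1))

    Successive differences cancel a smooth mean function (up to O(1/n) terms),
    so no regression fit is needed, at the cost of some efficiency.
    """
    d = np.diff(y)
    return d @ d / (2 * len(d))

rng = np.random.default_rng(3)
n = 5000
x = np.linspace(0.0, 1.0, n)
y = np.sin(2 * np.pi * x) + rng.standard_normal(n)  # smooth trend + unit-variance noise
s2_hat = rice_variance(y)
```

The article's point is precisely that more elaborate weighting schemes recover the efficiency this simple estimator gives up.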
10.
Viswanathan Ramakrishnan, Communications in Statistics - Simulation and Computation, 2013, 42(3): 405-418
In many genetic analyses of dichotomous twin data, odds ratios have been used to test hypotheses about heritability and shared common environment effects for a given disease (Lichtenstein et al., 2000; Ahlbom et al., 1997; Ramakrishnan et al., 1992). However, estimates of these two effects have not been dealt with in the literature. In epidemiology, the attributable fraction (AF), a function of the odds ratio and the prevalence of the risk factor, has been used to describe the contribution of a risk factor to a disease in a given population (Leviton, 1973). In this article, we adapt the AF to quantify heritability and the shared common environment. Twin data on cancer, gallstone disease, and phobia are used to illustrate the applicability of the AF estimate as a measure of heritability.
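The attributable fraction the article adapts has a simple closed form. The sketch below uses the Levin-type formula with the odds ratio substituted for the relative risk (a common approximation for rare diseases); the numerical inputs are illustrative, not from the twin data.

```python
def attributable_fraction(odds_ratio, prevalence):
    """Levin-type attributable fraction:

        AF = p (R - 1) / (1 + p (R - 1))

    where p is the prevalence of the risk factor and R is the relative risk,
    here approximated by the odds ratio (reasonable when the disease is rare).
    """
    excess = prevalence * (odds_ratio - 1.0)
    return excess / (1.0 + excess)

# With prevalence 0.5 and odds ratio 2, AF = 0.5 / 1.5 = 1/3:
af = attributable_fraction(2.0, 0.5)
```

An odds ratio of 1 gives AF = 0, consistent with a risk factor that contributes nothing to the disease burden.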
11.
In this article, several methods for making inferences about the parameters of a finite mixture of distributions are revisited in the context of centrally censored data with partial identification. These methods adapt the work of Contreras-Cristán, Gutiérrez-Peña, and O'Reilly (2003) on right censoring. The first method focuses on an asymptotic approximation to a suitably simplified likelihood using some latent quantities; the second is based on the expectation-maximization (EM) algorithm. Both methods make explicit use of latent variables and provide computationally efficient procedures compared with non-Bayesian methods that deal directly with the full likelihood of the mixture by appealing to its asymptotic approximation. The third method, from a Bayesian perspective, uses data augmentation to work with an uncensored sample; it is related to the Bayesian method recently proposed by Baker, Mengersen, and Davis (2005). The three adapted methods are shown to provide similar inferential answers, thus offering alternative analyses.
12.
Biao Zhang, Econometric Reviews, 2016, 35(2): 201-231
This paper discusses the estimation of average treatment effects in observational causal inference. By employing a working propensity score and two working regression models for the treatment and control groups, Robins et al. (1994, 1995) introduced the augmented inverse probability weighting (AIPW) method for estimating average treatment effects, which extends the inverse probability weighting (IPW) method of Horvitz and Thompson (1952); the AIPW estimators are locally efficient and doubly robust. In this paper, we study a hybrid of the empirical likelihood method and the method of moments that employs three estimating functions and generates estimators of average treatment effects that are locally efficient and doubly robust. The proposed estimators are efficient for the given choice of three estimating functions when the working propensity score is correctly specified, and thus are more efficient than the AIPW estimators. In addition, we consider a regression method for estimating the average treatment effects when the working regression models for both the treatment and control groups are correctly specified; the asymptotic variance of the resulting estimator is no greater than the semiparametric variance bound characterized by the theory of Robins et al. (1994, 1995). Finally, we present a simulation study comparing the finite-sample performance of the various methods with respect to bias, efficiency, and robustness to model misspecification.
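The AIPW estimator that serves as the paper's benchmark has a compact form. The sketch below implements the standard AIPW formula; for simplicity the true (oracle) propensity score and outcome regressions are plugged in as the "working models", and the data-generating process is an illustrative assumption.

```python
import numpy as np

def aipw_ate(y, t, e, m1, m0):
    """Augmented inverse probability weighting (AIPW) estimator of the ATE.

    e  : propensity scores P(T=1 | X)
    m1 : outcome-regression predictions E[Y | X, T=1]
    m0 : outcome-regression predictions E[Y | X, T=0]
    Doubly robust: consistent if either e or (m1, m0) is correctly specified.
    """
    mu1 = t * y / e - (t - e) / e * m1
    mu0 = (1 - t) * y / (1 - e) + (t - e) / (1 - e) * m0
    return np.mean(mu1 - mu0)

rng = np.random.default_rng(4)
n = 20_000
x = rng.standard_normal(n)
e = 1.0 / (1.0 + np.exp(-x))            # true propensity score (logistic in x)
t = (rng.random(n) < e).astype(float)
tau = 2.0                               # true average treatment effect
y = tau * t + x + rng.standard_normal(n)
ate_hat = aipw_ate(y, t, e, m1=tau + x, m0=x)
```

In practice e, m1, and m0 are fitted working models; the paper's contribution is an empirical-likelihood hybrid that dominates this estimator when the propensity model is correct.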
13.
There is an emerging consensus in empirical finance that realized volatility series typically display long range dependence with a memory parameter (d) around 0.4 (Andersen et al., 2001; Martens et al., 2004). The present article provides some illustrative analysis of how long memory may arise from the accumulative process underlying realized volatility. The article also uses results in Lieberman and Phillips (2004, 2005) to refine statistical inference about d by higher order theory. Standard asymptotic theory has an O(n^(-1/2)) error rate for error rejection probabilities, and the theory used here refines the approximation to an error rate of o(n^(-1/2)). The new formula is independent of unknown parameters, simple to calculate, and user-friendly. The method is applied to test whether the reported long memory parameter estimates of Andersen et al. (2001) and Martens et al. (2004) differ significantly from the lower boundary (d = 0.5) of nonstationary long memory, and it generally confirms earlier findings.
14.
15.
A Bottom-Up Dynamic Model of Portfolio Credit Risk with Stochastic Intensities and Random Recoveries
Tomasz R. Bielecki, Areski Cousin, Stéphane Crépey, Alexander Herbertsson, Communications in Statistics - Theory and Methods, 2014, 43(7): 1362-1389
In Bielecki et al. (2014a), the authors introduced a Markov copula model of portfolio credit risk in which pricing and hedging can be done in a theoretically and practically sound way. Further theoretical background and practical details are developed in Bielecki et al. (2014b,c), where the numerical illustrations assumed deterministic intensities and constant recoveries. In the present paper, we show how to incorporate stochastic default intensities and random recoveries into the bottom-up modeling framework of Bielecki et al. (2014a) while preserving numerical tractability. These two features are of primary importance for applications such as CVA computations on credit derivatives (Assefa et al., 2011; Bielecki et al., 2012), as CVA is sensitive to the stochastic nature of credit spreads, and random recoveries allow one to achieve satisfactory calibration even for "badly behaved" data sets. This article is thus a complement to Bielecki et al. (2014a,b,c).
16.
Guangyu Mao, Econometric Reviews, 2018, 37(5): 491-506
This article is concerned with a sphericity test for the two-way error components panel data model. It is found that the John statistic and the bias-corrected LM statistic recently developed by Baltagi et al. (2011, 2012), which are based on the within residuals, are not helpful in the present setting, even though they are in the one-way fixed effects model. However, we prove that when the within residuals are properly transformed, the resulting residuals can serve to construct useful statistics similar to those of Baltagi et al. (2011, 2012). Simulation results show that the newly proposed statistics perform well under the null hypothesis and several typical alternatives.
17.
Two types of estimates of the process level, namely repeated median estimates (Siegel, 1982) and full online estimates (Gather et al., 2006) based on repeated median filters, are used to develop control charts. The distributional properties of the estimates are studied by simulation and are found to follow the normal distribution closely. The repeated median, being robust against outliers with an asymptotic 50% breakdown value and having a small standard deviation, is found to be a useful basis for monitoring process averages. Control charts using repeated median estimates are recommended for general use.
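Siegel's repeated median is short enough to state in full: an inner median of pairwise slopes for each point, then an outer median. The sketch below is a minimal implementation (not the online filtering variant of Gather et al., 2006); the data with a single gross outlier are an illustrative assumption.

```python
import numpy as np

def repeated_median_line(x, y):
    """Siegel's (1982) repeated median regression line.

    slope     = median_i ( median_{j != i} (y_j - y_i) / (x_j - x_i) )
    intercept = median_i ( y_i - slope * x_i )

    The nested medians give an asymptotic 50% breakdown point, which is why
    the estimate is attractive for robust monitoring of a process level.
    """
    n = len(x)
    inner = np.empty(n)
    for i in range(n):
        j = np.arange(n) != i
        inner[i] = np.median((y[j] - y[i]) / (x[j] - x[i]))
    slope = np.median(inner)
    intercept = np.median(y - slope * x)
    return slope, intercept

x = np.arange(9, dtype=float)
y = 2.0 * x + 1.0
y[4] = 50.0                      # single gross outlier
slope, intercept = repeated_median_line(x, y)
```

Despite the outlier, both medians fall on the clean majority, so the fitted line is exactly y = 2x + 1.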
18.
By using the medical data analyzed by Kang et al. (2007), a Bayesian procedure is applied to obtain control limits for the coefficient of variation. Reference and probability matching priors are derived for a common coefficient of variation across the range of sample values. By simulating the posterior predictive density function of a future coefficient of variation, it is shown that the control limits are effectively identical to those obtained by Kang et al. (2007) for the specific dataset they used. This article illustrates the flexibility and unique features of the Bayesian simulation method for obtaining posterior distributions, predictive intervals, and run-lengths in the case of the coefficient of variation. A simulation study shows that the 95% Bayesian confidence intervals for the coefficient of variation have the correct frequentist coverage.
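The posterior predictive simulation the article relies on can be sketched in a few lines. The version below uses the standard noninformative prior for normal data rather than the article's reference and probability matching priors, and the phase-I data are simulated rather than the Kang et al. (2007) medical data; both are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical phase-I data: normal with true CV = sigma/mu = 2/10 = 0.2
data = rng.normal(loc=10.0, scale=2.0, size=50)
n, xbar, s2 = len(data), data.mean(), data.var(ddof=1)

# Posterior draws under the noninformative prior p(mu, sigma^2) ∝ 1/sigma^2:
#   sigma^2 | data ~ (n - 1) s^2 / chi^2_{n-1}
#   mu | sigma^2, data ~ N(xbar, sigma^2 / n)
draws = 20_000
sigma2 = (n - 1) * s2 / rng.chisquare(n - 1, size=draws)
mu = rng.normal(xbar, np.sqrt(sigma2 / n))

# Posterior predictive distribution of the CV of a future sample of size m;
# its quantiles serve as control limits for future coefficients of variation
m = 10
future = rng.normal(mu[:, None], np.sqrt(sigma2)[:, None], size=(draws, m))
cv = future.std(axis=1, ddof=1) / future.mean(axis=1)
lcl, ucl = np.quantile(cv, [0.025, 0.975])
```

Replacing the prior with a probability matching prior changes only the posterior-draw step; the predictive-simulation logic is unchanged.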
19.
Feng-Shou Ko, Communications in Statistics - Theory and Methods, 2013, 42(18): 3222-3237
We introduce a score test to identify longitudinal biomarkers or surrogates for a time-to-event outcome. The method extends Henderson et al. (2000, 2002), who assumed that the same random effect appears in the longitudinal component and in the Cox model, and on that basis derived a score test for whether a longitudinal biomarker is associated with time to an event. Our score test is likewise based on a joint likelihood function combining the likelihoods of the longitudinal biomarkers and the survival times, but it allows additional random effects to be present in the survival function. Allowing heterogeneous baseline hazards across individuals, we use simulations to explore how several factors influence the power of the score test to detect an association between a longitudinal biomarker and survival time: the functional form of the random effects from the longitudinal biomarkers, the number of individuals, and the number of time points per individual. We illustrate the method using the prothrombin index as a predictor of survival in liver cirrhosis patients.
20.
For Canada's boreal forest region, accurate modelling of the timing of the appearance of aspen leaves is important to forest fire management, as it signifies the end of the spring fire season that follows snowmelt. This article compares two methods, a midpoint rule and a conditional expectation method, for estimating the true flush date from interval-censored data collected at a large set of fire-weather stations in Alberta, Canada. The conditional expectation method uses the interval-censored kernel density estimator of Braun et al. (2005). The methods are compared via simulation, where true flush dates were generated from a normal distribution and then converted into intervals by adding and subtracting exponential random variables. The simulation parameters were estimated from the data set, and several scenarios were considered. The study reveals that the conditional expectation method is never worse than the midpoint method, and that it has a significant advantage when the intervals are large. An illustration of the methodology applied to the Alberta data set is also provided.
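The contrast between the two estimators is easy to sketch when the flush-date density is known. The code below computes the conditional expectation of a normal date given its censoring interval, against the midpoint (lo + hi)/2; it uses the normal density directly, whereas Braun et al. (2005) estimate the density by interval-censored kernel methods, and the numeric inputs are illustrative assumptions.

```python
import math

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def conditional_flush_date(mu, sigma, lo, hi):
    """E[T | lo < T < hi] for T ~ N(mu, sigma^2): the truncated-normal mean

        mu + sigma * (phi(a) - phi(b)) / (Phi(b) - Phi(a)),

    with a = (lo - mu)/sigma and b = (hi - mu)/sigma. Unlike the midpoint,
    this estimate uses the shape of the flush-date distribution, which is
    where the gain for wide censoring intervals comes from.
    """
    a = (lo - mu) / sigma
    b = (hi - mu) / sigma
    return mu + sigma * (norm_pdf(a) - norm_pdf(b)) / (norm_cdf(b) - norm_cdf(a))

# A wide, asymmetric interval around day-of-year 140 (hypothetical values)
est = conditional_flush_date(mu=140.0, sigma=5.0, lo=130.0, hi=145.0)
mid = (130.0 + 145.0) / 2
```

For an interval symmetric about mu the two estimates coincide; the asymmetric case above is where the conditional expectation pulls the estimate toward the bulk of the density and the midpoint does not.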