Similar Articles
1.
In this study, we propose using the Jackknife-after-Bootstrap (JaB) method to detect influential observations in the binary logistic regression model. The performance of the proposed method is compared with that of the traditional method for standardized Pearson residuals, Cook's distance, the change in the Pearson chi-square and the change in the deviance statistics, using both real-world examples and simulation studies. The results reveal that, under the various scenarios considered in this article, JaB performs better than the traditional method and is more robust to the masking effect, especially for Cook's distance.
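As a rough illustration of the JaB idea (a sketch, not the authors' implementation), the snippet below flags a case when its full-sample Cook's distance exceeds an upper percentile of the distances pooled over bootstrap resamples that happen to omit that case; it assumes statsmodels' GLM influence utilities, and all names and constants are illustrative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, B = 100, 200
X = sm.add_constant(rng.normal(size=(n, 2)))
y = rng.binomial(1, 1 / (1 + np.exp(-X @ np.array([0.5, 1.0, -1.0]))))

def cooks_d(Xs, ys):
    # Cook's distances from a binary-logit GLM fit
    fit = sm.GLM(ys, Xs, family=sm.families.Binomial()).fit()
    return fit.get_influence().cooks_distance[0]

full = cooks_d(X, y)
idx = rng.integers(0, n, size=(B, n))          # bootstrap index sets
boot = [cooks_d(X[s], y[s]) for s in idx]

for i in range(n):
    # JaB: pool the measure over resamples that omitted case i,
    # then use an upper percentile of the pool as the cut-off for case i
    pooled = np.concatenate([d for s, d in zip(idx, boot) if i not in s])
    if full[i] > np.percentile(pooled, 95):
        print(f"case {i}: Cook's D = {full[i]:.3f} flagged")
```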

2.
In this study, we propose sufficient time series bootstrap methods that achieve better results than the conventional non-overlapping block bootstrap, with less computing time and lower standard errors of estimation. We also propose a new technique using ordered bootstrapped blocks to better preserve the dependency structure of the original data. The performance of the proposed methods is compared in a simulation study for MA(2) and AR(2) processes and in an example. The results show that our methods are good competitors that often outperform the conventional block methods.
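A minimal sketch of a non-overlapping block bootstrap, with an `ordered` flag that reassembles the sampled blocks in their original time order; this is one reading of the "ordered bootstrapped blocks" idea, not the authors' code.

```python
import numpy as np

def nbb(x, block_len, rng, ordered=False):
    # cut the series into consecutive non-overlapping blocks
    starts = range(0, len(x) - block_len + 1, block_len)
    blocks = [x[i:i + block_len] for i in starts]
    pick = rng.integers(0, len(blocks), size=len(blocks))
    if ordered:
        pick = np.sort(pick)   # keep sampled blocks in original temporal order
    return np.concatenate([blocks[j] for j in pick])

rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=240))            # a dependent series
reps = [nbb(x, 12, rng, ordered=True).mean() for _ in range(1000)]
print("bootstrap SE of the mean:", np.std(reps))
```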

3.
The present study suggests the use of the normalized Johnson-transformation trimmed t statistic in the one-sample case when the assumption of normality is violated. The performance of the proposed method was evaluated by Monte Carlo simulation and compared with the conventional Student t statistic, the trimmed t statistic, and the normalized Johnson-transformation untrimmed t statistic. The simulation results indicate that the proposed method controls the type I error very well and that its power exceeds that of the other competitors under various conditions of non-normality. The method is easily programmed and provides an alternative to the conventional t test.
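The sketch below combines a Tukey-McLaughlin trimmed t with a Johnson (1978)-type skewness correction of the numerator; it is one plausible reading of the proposed statistic, and the paper's exact normalization may differ.

```python
import numpy as np
from scipy import stats

def johnson_trimmed_t(x, mu0, trim=0.1):
    n = len(x)
    g = int(np.floor(trim * n))
    xs = np.sort(x)
    xt = xs[g:n - g]                             # trimmed sample, size h
    h = n - 2 * g
    # Winsorized sample for the trimmed mean's standard error
    xw = np.concatenate([np.repeat(xs[g], g), xt, np.repeat(xs[n - g - 1], g)])
    sw2 = xw.var(ddof=1)
    se = np.sqrt(sw2) / ((h / n) * np.sqrt(n))   # Tukey-McLaughlin form
    d = xt.mean() - mu0
    mu3 = stats.moment(xt, 3)                    # third central moment
    # Johnson-type correction of the numerator for skewness
    num = d + mu3 / (6 * sw2 * h) + mu3 * d**2 / (3 * sw2**2)
    return num / se

rng = np.random.default_rng(2)
print(johnson_trimmed_t(rng.exponential(size=40), mu0=1.0))
```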

4.
The Jackknife-after-bootstrap (JaB) technique originally developed by Efron (1992) has been proposed as an approach to improving the detection of influential observations in linear regression models by Martin and Roberts (2010) and Beyaztas and Alin (2013). The method is based on the use of percentile-method confidence intervals to provide improved cut-off values for several single case-deletion influence measures. In order to improve JaB, we propose using robust versions of Efron's (1987) bias-corrected and accelerated (BCa) bootstrap confidence intervals. In this study, the performances of the robust BCa–JaB and conventional JaB methods are compared in the cases of DFFITS, Welsch's distance and modified Cook's distance influence diagnostics. Comparisons are based on both real data examples and a simulation study. Our results reveal that, under a variety of scenarios, our proposed method provides more accurate and reliable results, and it is more robust to masking effects.
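For reference, SciPy exposes BCa intervals directly; a sketch of how a cut-off interval for a bootstrapped influence measure might be obtained (illustrative only, not the robust BCa variant proposed in the paper):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
measure = rng.gamma(2.0, size=200)   # stand-in for case-deletion influence values

res = stats.bootstrap((measure,), np.mean, confidence_level=0.95,
                      method="BCa", random_state=rng)
print(res.confidence_interval)       # its upper limit could serve as a cut-off
```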

5.
For the conventional type-II hybrid censoring scheme (HCS) of Childs et al., a Bayesian variable sampling plan within the class of maximum likelihood estimators was derived by Lin et al. under a loss function that does not include the cost of experimental time. If the cost of experimental time is included in the loss function, a persuasive argument leads to adopting the modified type-II hybrid censoring scheme (MHCS) instead. In this article, we apply the decision-theoretic approach to the acceptance sampling problem concerned. With the type-II MHCS, based on a sufficient statistic, the optimal Bayesian sampling plan is derived under a general loss function. Furthermore, for the conjugate prior distribution, a closed-form formula for the Bayes decision rule can be obtained under quadratic decision loss. A numerical study is given to demonstrate the performance of the proposed Bayesian sampling plan.

6.
Swindel (1976) introduced a modified ridge regression estimator based on prior information. A necessary and sufficient condition is derived for Swindel's proposed estimator to have lower risk than the conventional ordinary ridge regression estimator when both estimators are computed using the same value of k.
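Swindel's modified ridge estimator is usually written as b(k, b0) = (X'X + kI)^(-1)(X'y + k b0), shrinking toward a prior point b0 rather than toward zero; a quick sketch on illustrative data:

```python
import numpy as np

def modified_ridge(X, y, k, b0):
    # Swindel-style ridge estimator shrinking toward the prior point b0
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y + k * b0)

rng = np.random.default_rng(4)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=50)
print(modified_ridge(X, y, k=0.5, b0=np.array([1.0, 2.0, -1.0])))  # prior point
print(modified_ridge(X, y, k=0.5, b0=np.zeros(3)))  # reduces to ordinary ridge
```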

7.
Structural inference as a method of statistical analysis seems to have escaped the attention of many statisticians. This paper focuses on Fraser’s necessary analysis of structural models as a tool to derive classical distribution results.

A structural model analyzed by Zacks (1971) by means of conventional statistical methods and fiducial theory is re-examined by the structural method. It is shown that results obtained by the former methods follow as easy consequences of the structural analysis of the model. In the process we also simplify Zacks' method of obtaining a minimum risk equivariant estimator of a parameter of the model.

A theorem of Basu (1955), often used to prove the independence of a complete sufficient statistic and an ancillary statistic, is also re-examined in the light of the structural method. It is found that for structural models more can be achieved by necessary analysis without the use of Basu's theorem. Bain's (1972) application of Basu's theorem to constructing confidence intervals for Weibull reliability is given as an example.

8.
Early detection with a low false alarm rate (FAR) is the main aim of outbreak detection as used in public health surveillance or with regard to bioterrorism. Multivariate surveillance is preferable to univariate surveillance because correlation between series (CBS) is recognized and incorporated. Sufficient reduction has proved a promising method for handling CBS, but has not previously been used when correlation within series (CWS) is present. Here we develop sufficient reduction methods for reducing a p-dimensional multivariate series to a univariate series of statistics shown to be sufficient for monitoring a sudden, but persistent, shift in the multivariate series mean. Correlation both within and between series is taken into account, as public health data typically exhibit both forms of association. Simultaneous and lagged changes and different shift sizes are investigated. A one-sided exponentially weighted moving average chart is used as the detection tool. The performance of the proposed method is compared with existing sufficient reduction methods, the parallel univariate method, and both VarR and Z charts. A simulation study using bivariate normal autoregressive data shows that the new method gives shorter delays and a lower FAR than the other methods, which have high FARs when CWS is clearly present.
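A minimal sketch of the detection tool, an upper one-sided EWMA applied to a (standardized) univariate reduced statistic; the smoothing constant and control limit are illustrative, not the paper's values.

```python
import numpy as np

def one_sided_ewma(z, lam=0.2, L=2.8):
    # upper one-sided EWMA, reset at zero; z assumed standardized in control
    limit = L * np.sqrt(lam / (2 - lam))     # asymptotic control limit
    e = 0.0
    for t, zt in enumerate(z):
        e = max(0.0, lam * zt + (1 - lam) * e)
        if e > limit:
            return t                          # first alarm time
    return None

rng = np.random.default_rng(5)
z = np.concatenate([rng.normal(size=50),        # in control
                    rng.normal(1.0, 1.0, 50)])  # mean shift at t = 50
print("alarm at t =", one_sided_ewma(z))
```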

9.
The standard Parzen-Rosenblatt kernel density estimator is known to deviate systematically from the true value near critical points of the density curve. To overcome this difficulty, we extend the Rao-Blackwell method by using locally sufficient statistics: we define a new estimator and study its asymptotic behaviour. The usefulness of the method is demonstrated by means of simulations.
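For orientation, the standard Parzen-Rosenblatt estimator being improved upon is just a kernel average; a minimal Gaussian-kernel sketch (the Rao-Blackwellized variant is not reproduced here):

```python
import numpy as np

def kde(grid, data, h):
    # Gaussian-kernel Parzen-Rosenblatt estimate at the grid points
    u = (grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(6)
data = rng.normal(size=300)
print(kde(np.linspace(-3, 3, 7), data, h=0.4))
```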

10.
The paper establishes the analytical grounds of the uniform superiority of a variable sampling interval (VSI) Shewhart control chart over the conventional fixed sampling interval (FSI) control chart, with respect to the zero-time performance, for a wide class of process distributions. We provide a sufficient condition on the distribution of a control chart statistic, and propose a criterion to determine the control limits and the regions in the in-control area of the VSI chart, corresponding to the different sampling intervals used by it. The condition and the criterion together ensure the uniform zero-time superiority of the VSI chart over the matched FSI chart, in detecting a process shift of any magnitude. It is shown that normal, Student's t and Laplace distributions satisfy the sufficient condition. In addition, chi-square, F and beta distributions satisfy it, provided that these are not extremely skewed. Further, it is illustrated that the superiority of the VSI feature is not trivial and cannot be assured if the sufficient condition is not satisfied or the control limits and the regions are not determined according to the proposed criterion. An application of the result to confirm the superiority of the VSI feature is demonstrated for the control chart for individual observations used to monitor a milk-pouch filling process.
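The VSI mechanism itself is simple: the next sampling interval depends on where the current statistic falls within the in-control area. A sketch with two intervals and an illustrative warning limit w (the paper determines the limits and regions from its proposed criterion; these constants are not those values):

```python
import numpy as np

def next_interval(x, ucl=3.0, lcl=-3.0, w=1.0, d_long=2.0, d_short=0.25):
    if x > ucl or x < lcl:
        return 0.0                 # out of control: signal immediately
    # central region -> long interval; warning region -> short interval
    return d_long if abs(x) <= w else d_short

rng = np.random.default_rng(7)
print([next_interval(x) for x in rng.normal(size=8)])
```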

11.
Although the concept of sufficient dimension reduction was proposed long ago, studies in the literature have largely focused on properties of estimators of dimension-reduction subspaces in the classical "small p, large n" setting. Rather than the subspace, this paper considers directly the set of reduced predictors, which we believe is more relevant for subsequent analyses. A principled method is proposed for estimating a sparse reduction, based on a new, revised representation of the well-known sliced inverse regression. A fast and efficient algorithm is developed for computing the estimator. The asymptotic behavior of the new method is studied when the number of predictors, p, exceeds the sample size, n, providing a guide for choosing the number of sufficient dimension-reduction predictors. Numerical results, including a simulation study and a cancer-drug-sensitivity data analysis, are presented to examine the performance.
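A compact sketch of classical sliced inverse regression, the method whose representation the paper revises; small-p illustration only (the paper's focus is p > n):

```python
import numpy as np

def sir(X, y, n_slices=5, n_dir=1):
    n, p = X.shape
    Xc = X - X.mean(0)
    evals, evecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    S_inv_half = evecs @ np.diag(evals ** -0.5) @ evecs.T   # standardize X
    Z = Xc @ S_inv_half
    M = np.zeros((p, p))
    for chunk in np.array_split(np.argsort(y), n_slices):   # slice on sorted y
        m = Z[chunk].mean(0)
        M += (len(chunk) / n) * np.outer(m, m)   # covariance of slice means
    _, V = np.linalg.eigh(M)
    B = S_inv_half @ V[:, -n_dir:]               # back to the original scale
    return B / np.linalg.norm(B, axis=0)

rng = np.random.default_rng(8)
X = rng.normal(size=(300, 6))
b = np.array([1.0, 1.0, 0, 0, 0, 0])
y = (X @ b) ** 3 + rng.normal(size=300)
print(sir(X, y).ravel())   # roughly proportional to (1, 1, 0, 0, 0, 0)
```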

12.
The RAS method is widely used in input-output analysis, mainly to revise and forecast the input-output coefficient matrix for a projection period. Many refinements of RAS have been developed, but when two matrices must be balanced and forecast jointly, neither the conventional RAS method nor its refinements suffice; forecasting the extended flow-of-funds matrix is precisely a case in which two balanced matrices must be predicted simultaneously. In view of this, this article discusses a double-matrix RAS (DRAS) balancing method, gives its mathematical formulation, and illustrates its application to forecasting the flow-of-funds matrix.
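A minimal sketch of the conventional single-matrix RAS (biproportional) balancing that DRAS extends; the double-matrix coupling itself is not reproduced here.

```python
import numpy as np

def ras(A, row_targets, col_targets, tol=1e-10, max_iter=1000):
    X = A.astype(float).copy()
    for _ in range(max_iter):
        X *= (row_targets / X.sum(axis=1))[:, None]   # R step: scale rows
        X *= (col_targets / X.sum(axis=0))[None, :]   # S step: scale columns
        if np.allclose(X.sum(axis=1), row_targets, atol=tol):
            return X
    return X

A = np.array([[10.0, 20.0], [30.0, 40.0]])            # base-period matrix
X = ras(A, np.array([35.0, 65.0]), np.array([45.0, 55.0]))
print(X)                       # row and column sums now match the targets
```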

13.
This article considers the first-order autoregressive panel model, a simple model for dynamic panel data (DPD). The generalized method of moments (GMM) gives efficient estimators for these models, but this efficiency is affected by the choice of the weighting matrix used in GMM estimation. Conventional GMM estimators have used non-optimal weighting matrices, which leads to a loss of efficiency. We therefore present new GMM estimators based on optimal or suboptimal weighting matrices. A Monte Carlo study indicates that the new estimators are more reliable than the conventional ones in terms of bias and efficiency.
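To show how the weighting matrix enters, here is a generic one-step versus two-step linear GMM sketch in an instrumental-variables setup; it is not the authors' dynamic-panel estimator, and all data are synthetic.

```python
import numpy as np

def gmm(X, Z, y, W):
    # linear GMM: minimize the weighted instrument-moment quadratic form
    A = X.T @ Z @ W @ Z.T @ X
    b = X.T @ Z @ W @ Z.T @ y
    return np.linalg.solve(A, b)

rng = np.random.default_rng(9)
n = 500
Z = rng.normal(size=(n, 2))                        # instruments
x = Z @ np.array([1.0, 0.5]) + rng.normal(size=n)  # regressor
y = 2.0 * x + rng.normal(size=n) * (1 + Z[:, 0] ** 2)  # heteroskedastic errors
X = x[:, None]

W1 = np.linalg.inv(Z.T @ Z / n)        # non-optimal first-step weight
b1 = gmm(X, Z, y, W1)
u = y - X @ b1                         # first-step residuals
S = (Z * (u ** 2)[:, None]).T @ Z / n  # moment covariance estimate
b2 = gmm(X, Z, y, np.linalg.inv(S))    # optimal two-step weight
print(b1, b2)                          # both near the true coefficient 2
```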

14.
A blockwise shrinkage is a popular adaptive procedure for non-parametric series estimates. It possesses an impressive range of asymptotic properties, and a vast pool of blocks and shrinkage procedures is in use. Traditionally these estimates are studied via upper bounds on their risks. This article suggests studying these adaptive estimates via non-asymptotic lower bounds established for a spike underlying function that plays a pivotal role in wavelet and minimax statistics. While upper-bound inequalities help the statistician find sufficient conditions for desirable estimation, the non-asymptotic lower bounds yield necessary conditions and shed new light on the popular method of adaptation. The suggested method complements and knits together two traditional techniques used in the analysis of adaptive estimates: numerical study and asymptotic minimax inference.

15.
In this article, we propose sparse sufficient dimension reduction as a novel method for Markov blanket discovery of a target variable, without taking any distributional assumption on the variables. By assuming sparsity of the basis of the central subspace, we develop a penalized loss-function estimate on the high-dimensional covariance matrix. A coordinate descent algorithm based on inverse regression is used to obtain the sparse basis of the central subspace. The finite-sample behavior of the proposed method is explored by a simulation study and real data examples.

16.
Li Rui et al., 《统计研究》 (Statistical Research), 2014, 31(8): 52-58
Drawing on the analytical frameworks of Heckman and Raj, and using administrative data from the Golden Social Security Project (金保工程) of Suzhou Industrial Park, this paper builds a counterfactual microsimulation model for evaluating the reform of the pension system from a fully funded to a partially funded scheme. Given the heterogeneity of insured workers, quantile analysis is adopted, replacing the "average replacement-rate increase" with the "distribution of replacement-rate increases" to assess the individual welfare gains and losses from the reform. Mean and quantile regressions are then introduced to build an economic structural model linking the replacement-rate increase to wages and other factors, to estimate Raj's full-information evaluation indices, and to assess the income-redistribution effect of the reform. Methodologically, the paper integrates actuarial and economic structural models, extending the traditional counterfactual policy-evaluation approach; practically, it offers the government a template for exploiting Golden Social Security Project administrative data to support social-security decision-making.
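A minimal sketch of the quantile-regression step, regressing a replacement-rate increase on (log) wage at several quantiles alongside the mean regression; the data and variable names are synthetic stand-ins, not the Golden Social Security Project records.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(10)
wage = rng.lognormal(mean=10.0, sigma=0.5, size=1000)
# synthetic replacement-rate increase, declining in the wage
incr = 5.0 - 0.3 * np.log(wage) + rng.normal(scale=0.5, size=1000)
df = pd.DataFrame({"incr": incr, "lwage": np.log(wage)})

for q in (0.1, 0.5, 0.9):
    fit = smf.quantreg("incr ~ lwage", df).fit(q=q)
    print(f"q = {q}: slope = {fit.params['lwage']:.3f}")
print("OLS slope:", smf.ols("incr ~ lwage", df).fit().params["lwage"])
```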

17.
A general saddlepoint/Monte Carlo method to approximate (conditional) multivariate probabilities is presented. This method requires a tractable joint moment generating function (m.g.f.), but does not require a tractable distribution or density. The method is easy to program and has third-order accuracy with respect to increasing sample size, in contrast to standard asymptotic approximations, which are typically accurate only to first order.

The method is most easily described in the context of a continuous regular exponential family. Here, inferences can be formulated as probabilities with respect to the joint density of the sufficient statistics or the conditional density of some sufficient statistics given the others. Analytical expressions for these densities are not generally available, and it is often not possible to simulate exactly from the conditional distributions to obtain a direct Monte Carlo approximation of the required integral. A solution to the first of these problems is to replace the intractable density by a highly accurate saddlepoint approximation. The second problem can be addressed via importance sampling, that is, an indirect Monte Carlo approximation involving simulation from a crude approximation to the true density. Asymptotic normality of the sufficient statistics suggests an obvious candidate for an importance distribution.

The more general problem considers the computation of a joint probability for a subvector of a random vector T, given its complementary subvector, when its distribution is intractable but its joint m.g.f. is computable. For such settings, the distribution may be tilted, maintaining T as the sufficient statistic. Within this tilted family, the computation of such multivariate probabilities proceeds as described for the exponential family setting.
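A toy illustration of the two ingredients in a case with a tractable m.g.f.: the saddlepoint density of the mean of n Exp(1) variables, used as importance-sampling weights under the asymptotic-normal proposal. This is a univariate sketch, not the paper's general multivariate routine.

```python
import numpy as np
from scipy import stats

n = 20                                   # T = mean of n Exp(1) variables

def saddlepoint_density(x):
    # per-observation c.g.f. K(s) = -log(1 - s); saddlepoint solves K'(s) = x
    s = 1 - 1 / x
    K = -np.log(1 - s)
    K2 = 1 / (1 - s) ** 2                # K''(s)
    return np.sqrt(n / (2 * np.pi * K2)) * np.exp(n * (K - s * x))

# importance sampling of P(T > 1.5) from the asymptotic-normal proposal
rng = np.random.default_rng(11)
prop = stats.norm(loc=1.0, scale=1 / np.sqrt(n))
x = prop.rvs(size=200_000, random_state=rng)
w = np.zeros_like(x)
m = x > 1.5                              # only tail points contribute
w[m] = saddlepoint_density(x[m]) / prop.pdf(x[m])
print("IS estimate:", w.mean())
print("exact:      ", stats.gamma(n, scale=1 / n).sf(1.5))  # T ~ Gamma(n, 1/n)
```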

18.
In this study, necessary and sufficient conditions for the Liu-type (LT) biased estimator are determined. A test for choosing between the LT estimator and the least-squares estimator is obtained by using these necessary and sufficient conditions. A simulation study is also carried out to compare this estimator against the ridge estimator. Furthermore, a numerical example is given for the defined test statistic.

19.
Motivated by the difficulty of directly obtaining the maximum likelihood estimate (MLE) for the two-dimensional lognormal distribution function, this paper presents a novel solution based on an extremum-equivalence conversion. The problem is transformed into finding the extremum of an equivalent function of the independent variable, and a closed-form expression for the extremum is then obtained by conventional differentiation. Localization in a wireless sensor network is taken as an example. The research shows that the method markedly reduces both the difficulty of the problem and the algorithmic complexity, and thus provides an effective solution for similar problems.

20.
This article is concerned with the problem of multicollinearity in a linear model with linear restrictions. After introducing a spherical restricted condition, a new restricted ridge estimation method is proposed by minimizing the sum of squared residuals. The superiority of the new estimator over the ordinary restricted least-squares estimator is then analyzed theoretically. Furthermore, a sufficient and necessary condition for selecting the ridge parameter k is obtained; to simplify the selection, a sufficient condition is also given. Finally, a numerical example demonstrates the merit of the new method in addressing multicollinearity relative to the ordinary restricted least-squares estimation.
