Similar Articles
20 similar articles found.
1.
Abstract

In this paper we develop a Bayesian analysis for the nonlinear regression model with errors that follow a continuous autoregressive process, so that unequally spaced observations do not present a problem in the analysis. We employ the Gibbs sampler (see Gelfand and Smith, 1990, Sampling-based approaches to calculating marginal densities, J. Amer. Statist. Assoc. 85:398-409) as the foundation for making Bayesian inferences. We illustrate these Bayesian inferences with an analysis of a real data set. Using these same data, we contrast the Bayesian approach with a generalized least squares technique.
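For orientation, a common way to write a nonlinear regression with continuous-time AR(1) errors (a sketch of the general setup, not necessarily the authors' exact parameterization) is

\[ y_i = f(x_i;\beta) + \varepsilon_i, \qquad \operatorname{Corr}(\varepsilon_i,\varepsilon_j) = \rho^{|t_i - t_j|}, \quad 0<\rho<1, \]

where t_i is the time of the i-th observation; because the correlation depends only on the actual time gap |t_i - t_j|, unequally spaced observations pose no special difficulty.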

2.
A variety of statistical approaches have been suggested in the literature for the analysis of bounded outcome scores (BOS). In this paper, we suggest a statistical approach for the case where BOSs are repeatedly measured over time and used as predictors in a regression model. Instead of directly using the BOS as a predictor, we propose to extend the approaches suggested by Lesaffre et al. (2007), Molas and Lesaffre (2008), and Tsonaka et al. (2006) to a joint modeling setting. Our approach is illustrated on longitudinal profiles of multiple patient-reported outcomes used to predict the current clinical status of rheumatoid arthritis patients as measured by the Disease Activity Score of 28 joints (DAS28). Both a maximum likelihood and a Bayesian approach are developed.
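As a rough illustration of the kind of transformation underlying the cited logit-transform approach (a sketch of one common variant, not the exact joint-model specification of the paper), a score Y restricted to the interval [0, m] can be mapped to the real line before modeling:

\[ Z = \log\frac{Y + c}{m - Y + c}, \]

where a small offset c > 0 keeps the boundary values 0 and m finite after transformation.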

3.
This article is concerned with the sphericity test for the two-way error components panel data model. It is found that the John statistic and the bias-corrected LM statistic recently developed by Baltagi et al. (2011, 2012), which are based on the within residuals, are not helpful under the present circumstances, even though they are in the one-way fixed effects model. However, we prove that when the within residuals are properly transformed, the resulting residuals can serve to construct useful statistics that are similar to those of Baltagi et al. (2011, 2012). Simulation results show that the newly proposed statistics perform well under the null hypothesis and several typical alternatives.
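For reference, the classical John statistic for testing sphericity of a p x p covariance matrix, on which the panel versions discussed here build (the residual-based construction in the article differs in detail), is

\[ U = \frac{1}{p}\,\operatorname{tr}\!\left[\left(\frac{p\,S}{\operatorname{tr}(S)} - I_p\right)^{2}\right], \]

where S is the sample covariance matrix; U is close to zero when S is nearly proportional to the identity.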

4.
Sanaullah et al. (2014) suggested generalized exponential chain ratio estimators under a stratified two-phase sampling scheme for estimating the finite population mean. However, the bias and mean square error (MSE) expressions presented in that work need some corrections, and consequently the efficiency comparisons based on them also require corrections. In this article, we revisit the Sanaullah et al. (2014) estimator and provide the correct bias and MSE expressions. We also propose an estimator which is more efficient than several competing estimators, including the classes of estimators in Sanaullah et al. (2014). Three real data sets are used for efficiency comparisons.
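To fix ideas, the basic exponential ratio estimator of a population mean (the non-chain, non-stratified ancestor of the class studied here; a sketch rather than the estimator of the paper) takes the form

\[ \bar{y}_{\mathrm{exp}} = \bar{y}\,\exp\!\left(\frac{\bar{X} - \bar{x}}{\bar{X} + \bar{x}}\right), \]

where \bar{y} and \bar{x} are sample means of the study and auxiliary variables and \bar{X} is the known population mean of the auxiliary variable; chain and two-phase versions replace \bar{X} with an estimate from the first-phase sample.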

5.
This paper discusses the estimation of average treatment effects in observational causal inference. By employing a working propensity score and two working regression models for the treatment and control groups, Robins et al. (1994, 1995) introduced the augmented inverse probability weighting (AIPW) method for estimation of average treatment effects, which extends the inverse probability weighting (IPW) method of Horvitz and Thompson (1952); the AIPW estimators are locally efficient and doubly robust. In this paper, we study a hybrid of the empirical likelihood method and the method of moments by employing three estimating functions, which can generate estimators for average treatment effects that are locally efficient and doubly robust. The proposed estimators of average treatment effects are efficient for the given choice of three estimating functions when the working propensity score is correctly specified, and thus are more efficient than the AIPW estimators. In addition, we consider a regression method for estimation of the average treatment effects when the working regression models for both the treatment and control groups are correctly specified; the asymptotic variance of the resulting estimator is no greater than the semiparametric variance bound characterized by the theory of Robins et al. (1994, 1995). Finally, we present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification.
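For concreteness, the standard AIPW estimator of the mean potential outcome under treatment (the building block that the empirical-likelihood hybrid described here aims to improve upon) can be written as

\[ \hat{\mu}_1 = \frac{1}{n}\sum_{i=1}^{n}\left[\frac{T_i Y_i}{\hat{e}(X_i)} - \frac{T_i - \hat{e}(X_i)}{\hat{e}(X_i)}\,\hat{m}_1(X_i)\right], \]

where T_i is the treatment indicator, \hat{e}(X_i) the working propensity score and \hat{m}_1(X_i) the working regression model for the treated; the average treatment effect is estimated by \hat{\mu}_1 - \hat{\mu}_0, with \hat{\mu}_0 defined analogously. The estimator is consistent if either working model is correctly specified, which is the double-robustness property.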

6.
This article considers estimation of Panel Vector Autoregressive Models of order 1 (PVAR(1)), with a focus on fixed-T consistent estimation methods in First Differences (FD) with additional strictly exogenous regressors. Additional results for the panel FD ordinary least squares (OLS) estimator and the FDLS-type estimator of Han and Phillips (2010) are provided. Furthermore, we simplify the analysis of Binder et al. (2005) by providing additional analytical results and extend the original model by taking into account possible cross-sectional heteroscedasticity and the presence of strictly exogenous regressors. We show that in the three-wave panel the log-likelihood function of the unrestricted Transformed Maximum Likelihood (TML) estimator might violate the global identification assumption. The finite-sample performance of the analyzed methods is investigated in a Monte Carlo study.
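As a sketch of the setup (notation assumed here, not taken from the article), a PVAR(1) with individual effects and strictly exogenous regressors can be written as

\[ y_{it} = \Phi\, y_{i,t-1} + B\, x_{it} + \mu_i + \varepsilon_{it}, \]

and first differencing removes the individual effects,

\[ \Delta y_{it} = \Phi\, \Delta y_{i,t-1} + B\, \Delta x_{it} + \Delta\varepsilon_{it}, \]

which is the form on which the FD estimators discussed above operate.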

7.
This paper presents a new variable-weight method, called the singular value decomposition (SVD) approach, for Kohonen competitive learning (KCL) algorithms, based on the concept of Varshavsky et al. (2006). Integrating the weighted fuzzy c-means (FCM) algorithm with KCL, we propose a weighted fuzzy KCL (WFKCL) algorithm. The goal of the proposed WFKCL algorithm is to reduce the clustering error rate when the data contain some noise variables. Compared with k-means, FCM and KCL with existing variable-weight methods, the proposed WFKCL algorithm with the proposed SVD weight method provides better clustering performance based on the error-rate criterion. Furthermore, the complexity of the proposed SVD approach is lower than that of Pal et al. (2000), Wang et al. (2004) and Hung et al. (2008).
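As background, a generic feature-weighted fuzzy clustering objective of the kind that weighted competitive-learning schemes build on (a sketch; the WFKCL update rules and the SVD-based weights are specified in the paper) is

\[ J_m(U, V, w) = \sum_{k=1}^{c}\sum_{i=1}^{n} u_{ik}^{m} \sum_{j=1}^{p} w_j\,(x_{ij} - v_{kj})^{2}, \]

where u_{ik} are fuzzy memberships, v_k are cluster prototypes and w_j are nonnegative feature weights; small weights on noise variables reduce their influence on the clustering.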

8.
Ye Li, Econometric Reviews, 2017, 36(1-3): 289-353
We consider issues related to inference about locally ordered breaks in a system of equations, as originally proposed by Qu and Perron (2007). These apply when break dates in different equations within the system are not separated by a positive fraction of the sample size. This allows constructing joint confidence intervals for all such locally ordered break dates. We extend the results of Qu and Perron (2007) in several directions. First, we allow the covariates to be any mix of trends and stationary or integrated regressors. Second, we allow for breaks in the variance-covariance matrix of the errors. Third, we allow for multiple locally ordered breaks, each occurring in a different equation within a subset of equations in the system. Via simulation experiments, we show first that the limit distributions derived provide good approximations to the finite-sample distributions. Second, we show that forming confidence intervals in such a joint fashion allows more precision (tighter intervals) compared to the standard approach of forming confidence intervals using the method of Bai and Perron (1998) applied to a single equation. Simulations also indicate that using the locally ordered break confidence intervals yields better coverage rates than using the framework for globally distinct breaks when the break dates are separated by roughly 10% of the total sample size.

9.
The density power divergence (DPD) measure, defined in terms of a single parameter α, has proved to be a popular tool in the area of robust estimation (Basu et al., 1998). Recently, Ghosh and Basu (2013) rigorously established the asymptotic properties of the minimum DPD estimators (MDPDEs) in the case of independent non-homogeneous observations. In this paper, we present an extensive numerical study describing the performance of the method in the case of linear regression, the most common setup involving non-homogeneous data. In addition, we extend the existing methods for selecting the optimal robustness tuning parameter from the case of independent and identically distributed (i.i.d.) data to the case of non-homogeneous observations. Proper selection of the tuning parameter is critical to the appropriateness of the resulting analysis. The selection of the optimal robustness tuning parameter is explored in the context of the linear regression problem with an extensive numerical study involving real and simulated data.
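For reference, the density power divergence of Basu et al. (1998) between densities g and f is

\[ d_{\alpha}(g,f) = \int \left\{ f^{1+\alpha} - \left(1 + \frac{1}{\alpha}\right) g\,f^{\alpha} + \frac{1}{\alpha}\, g^{1+\alpha} \right\} dx, \qquad \alpha > 0, \]

with the Kullback-Leibler divergence recovered as \alpha \to 0; larger values of the tuning parameter \alpha give more robustness at the cost of efficiency, which is why its selection matters.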

10.
The jackknife-after-bootstrap (JaB) technique, originally developed by Efron (1992), has been proposed as an approach to improve the detection of influential observations in linear regression models by Martin and Roberts (2010) and Beyaztas and Alin (2013). The method is based on the use of percentile-method confidence intervals to provide improved cut-off values for several single case-deletion influence measures. In order to improve JaB, we propose using robust versions of Efron's (1987) bias-corrected and accelerated (BCa) bootstrap confidence intervals. In this study, the performances of the robust BCa-JaB and conventional JaB methods are compared for the DFFITS, Welsch's distance and modified Cook's distance influence diagnostics. Comparisons are based on both real data examples and a simulation study. Our results reveal that, under a variety of scenarios, our proposed method provides more accurate and reliable results, and it is more robust to masking effects.
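A minimal sketch of the jackknife-after-bootstrap idea for influence cut-offs is given below. It is illustrative only: the function name, the quantile level and the generic influence_fn hook are placeholders, and the robust BCa refinement proposed in the paper is not shown.

import numpy as np

def jab_cutoffs(X, y, influence_fn, B=2000, level=0.95, seed=None):
    """Percentile-based JaB cut-offs for an influence diagnostic (sketch)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    resamples = [rng.integers(0, n, size=n) for _ in range(B)]
    cutoffs = np.empty(n)
    for i in range(n):
        # keep only bootstrap resamples that do NOT contain observation i,
        # and evaluate the influence of (X[i], y[i]) on models fitted to them
        vals = [influence_fn(X[idx], y[idx], X[i], y[i])
                for idx in resamples if i not in idx]
        cutoffs[i] = np.quantile(np.abs(vals), level)
    return cutoffs  # flag observation i if its measure exceeds cutoffs[i]

Here influence_fn(X_s, y_s, x_i, y_i) stands for any single case-deletion diagnostic (e.g., DFFITS) of the point (x_i, y_i) computed with respect to a model fitted on the resample (X_s, y_s).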

11.
Abstract

In this article, we improve upon the Singh and Grewal (2013) and Hussain et al. (2016) techniques by introducing a new two-stage randomized response process. Using the proposed technique, we achieve better efficiency and increased protection of respondents' privacy compared with the Kuk (1990), Singh and Grewal (2013) and Hussain et al. (2016) models. The relative efficiency and respondent protection of the proposed two-stage randomization device are investigated through a simulation study, and the situations in which the proposed estimator performs better than its competitors are reported. The SAS code used to investigate the performance of the proposed strategy is also provided.

12.
Soltani and Mohammadpour (2006) observed that, for multivariate stationary processes, the backward and forward moving average coefficients are in general different, unlike in the univariate case. This has stimulated research on deriving the forward moving average coefficients in terms of the backward moving average coefficients. In this article we develop a practical procedure for the case where the underlying process is a multivariate moving average (or univariate periodically correlated) process of finite order. Our procedure is based on two key observations: order reduction (Li, 2005) and first-order analysis (Mohammadpour and Soltani, 2010).

13.
In this research, multiple dependent state and repetitive group sampling are used to design a variable sampling plan based on one-sided process capability indices, which considers the quality of the current lot as well as the quality of the preceding lots. The sample size and critical values of the proposed plan are determined by minimizing the average sample number while satisfying the producer's risk and the consumer's risk at the corresponding quality levels. In addition, comparisons are made with existing sampling plans (Pearn and Wu, 2006a; Yen et al., 2015) in terms of the average sample number and the operating characteristic curve. Finally, an example is provided to illustrate the proposed plan.
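For reference, the one-sided process capability indices that such plans are typically based on are

\[ C_{PU} = \frac{USL - \mu}{3\sigma}, \qquad C_{PL} = \frac{\mu - LSL}{3\sigma}, \]

where USL and LSL are the upper and lower specification limits; larger values indicate a smaller fraction of nonconforming items, and the plan's critical values act as thresholds on the estimated index.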

14.
To better understand the power shift and the U.S. role compared to China and other regional actors, the Chicago Council on Global Affairs and the East Asia Institute (EAI) surveyed people in six countries - China, Japan, South Korea, Vietnam, Indonesia, and the United States - in the first half of 2008 about regional security and economic integration in Asia and about how these nations perceive each other (Bouton et al., 2010). There exists latent variance that cannot be adequately explained by parametric models, in large part because of hidden structures and latent patterns that form in unexpected ways. Therefore, a new Gibbs sampler is developed here in order to reveal previously unseen structures and latent variance found in the survey dataset of Bouton et al. The new sampler is based on semiparametric regression, a well-known tool frequently utilized to capture the functional dependence between variables through fixed-effect parametric and nonlinear regression components. This is then extended to a generalized semiparametric regression for binary responses with logit and probit link functions. The new sampler is developed for the generalized linear mixed model with a nonparametric random effect, expressed as a nonparametric regression with a multinomial-Dirichlet distribution for the number and positions of the knots.
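As a sketch of the type of model the new sampler targets (notation assumed here), a generalized semiparametric regression for a binary survey response can be written as

\[ \Pr(y_{ij} = 1 \mid b_i) = H\{x_{ij}'\beta + f(z_{ij}) + b_i\}, \]

where H is the inverse logit or probit link, f is an unknown smooth function represented by a spline whose number and positions of knots receive a multinomial-Dirichlet prior, and b_i is a random effect with a nonparametric distribution.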

15.
The present paper suggests an interesting and useful ramification of the unrelated-question randomized response model of Pal and Singh (2012) [A new unrelated question randomized response model. Statistics 46(1), 99-109] that can be used under any sampling scheme. We show theoretically and numerically that the proposed model is more efficient than the Pal and Singh (2012) model.

16.
This paper aims at providing an efficient new unbiased estimator for estimating the proportion of a potentially sensitive attribute in survey sampling. The suggested randomization device makes use of the means and variances of the scrambling variables and of two scalars lying between zero and one; thus, the same amount of information is used at the estimation stage. The variance formula of the suggested estimator is obtained. We compare the proposed unbiased estimator with those of Kuk (1990), Franklin (1989), and Singh and Chen (2009), and obtain the conditions under which the proposed estimator is more efficient than each of them. The optimum estimator (OE) in the proposed class of estimators is identified; it depends only on moment ratios of the scrambling variables. The variance of the optimum estimator is obtained and compared with those of the Kuk (1990), Franklin (1989), and Singh and Chen (2009) estimators. It is worth noting that the optimum estimator of the class due to Singh and Chen (2009) depends on the parameter π under investigation, which limits its use in practice, whereas the proposed OE is free from this constraint and depends only on the moment ratios of the scrambling variables. This is an advantage over the Singh and Chen (2009) estimator. Numerical illustrations are given in support of the present study for the case where the scrambling variables follow a normal distribution. Theoretical and empirical results are very sound and quite illuminating in favor of the present study.

17.
Gadre and Rattihalli (2006) introduced the Modified Group Runs (MGR) control chart to identify increases in the fraction non-conforming and to detect shifts in the process mean. The MGR chart reduces the out-of-control average time-to-signal (ATS) compared with most well-known control charts. In this article, we develop the Side-Sensitive Modified Group Runs (SSMGR) chart to detect shifts in the process mean. With the help of numerical examples, it is illustrated that the SSMGR chart performs better than the Shewhart chart, the synthetic chart (Wu and Spedding, 2000), the Group Runs chart (Gadre and Rattihalli, 2004), the Side-Sensitive Group Runs chart (Gadre and Rattihalli, 2007), as well as the MGR chart (Gadre and Rattihalli, 2006). In some situations it is also superior to the cumulative sum chart (Page, 1954) and the exponentially weighted moving average chart (Roberts, 1959). In the steady state, too, its performance is better than that of the above charts.

18.
This article considers constructing confidence intervals for the date of a structural break in linear regression models. Using extensive simulations, we compare the performance of various procedures in terms of exact coverage rates and lengths of the confidence intervals. These include the procedure of Bai (1997), based on the asymptotic distribution under a shrinking-shift framework; that of Elliott and Müller (2007), based on inverting a test locally invariant to the magnitude of the break; that of Eo and Morley (2015), based on inverting a likelihood ratio test; and various bootstrap procedures. On the basis of achieving an exact coverage rate closest to the nominal level, Elliott and Müller's (2007) approach is by far the best one. However, this comes at a very high cost in terms of the length of the confidence intervals. When the errors are serially correlated and one deals with a change in the intercept or a change in the coefficient of a stationary regressor with a high signal-to-noise ratio, the length of the confidence interval increases and approaches the whole sample as the magnitude of the change increases. The same problem occurs in models with a lagged dependent variable, a common case in practice. This drawback is not present for the other methods, which have similar properties. Theoretical results are provided to explain the drawbacks of Elliott and Müller's (2007) method.

19.
This article proposes a new likelihood-based panel cointegration rank test which extends the test of Örsal and Droge (2014) (henceforth the panel SL test) to dependent panels. The dependence is modelled by unobserved common factors which affect the variables in each cross-section through heterogeneous loadings. The data are defactored following the panel analysis of nonstationarity in idiosyncratic and common components (PANIC) approach of Bai and Ng (2004), and the cointegrating rank of the defactored data is then tested by the panel SL test. A Monte Carlo study demonstrates that the proposed testing procedure has reasonable size and power properties in finite samples.
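As a sketch of the defactoring step (notation assumed here), the observed series are modelled as

\[ x_{it} = \lambda_i' F_t + e_{it}, \]

where F_t are unobserved common factors and \lambda_i heterogeneous loadings; following the PANIC approach the factors are estimated by principal components, and the panel SL cointegration rank test is then applied to the defactored (idiosyncratic) components.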

20.
We present results on the second-order behavior and the expected maximal increments of Lamperti transforms of self-similar Gaussian processes and their exponentials. The Ornstein-Uhlenbeck processes driven by fractional Brownian motion (fBM) and their exponentials have recently been studied in Matsui and Shieh (2009, 2012), where essential use is made of particular properties such as the stationary increments of fBM. Here, the processes treated are fBM, bi-fBM, and sub-fBM; the latter two do not have stationary increments. We utilize decompositions of self-similar Gaussian processes and effectively evaluate the maxima and correlations of each decomposed process. We also discuss the use of the exponential stationary processes for stochastic modeling.
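For reference, the Lamperti transform that links self-similarity to stationarity is as follows: if X is H-self-similar, then

\[ Y(t) = e^{-Ht} X(e^{t}), \qquad t \in \mathbb{R}, \]

is strictly stationary; the processes studied here are of this form, with X taken to be fBM, bi-fBM, or sub-fBM.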
