1.
Heng Lian, Communications in Statistics - Theory and Methods, 2013, 42(11):1893-1900
We extend the approach of Walker (2003, 2004) to the case of misspecified models. A sufficient condition for establishing rates of convergence is given based on a key identity involving martingales, which does not require the construction of tests. We also show roughly that the result obtained by using tests can also be obtained by our approach, which demonstrates the potential wider applicability of this method.
2.
Chung-Ho Chen, Communications in Statistics - Theory and Methods, 2013, 42(10):1767-1778
Economic selection of process parameters has been an important topic in modern statistical process control. The optimum process parameter settings have a major effect on the expected profit/cost per item. There are some concerns about the problem of setting process parameters. Boucher and Jafari (1991) first considered the attribute single sampling plan applied to the selection of the process target. Pulak and Al-Sultan (1996) extended Boucher and Jafari's model and presented the rectifying inspection plan for determining the optimum process mean. In this article, we further propose a modified Pulak and Al-Sultan model for determining the optimum process mean and standard deviation under the rectifying inspection plan with average outgoing quality limit (AOQL) protection. Taguchi's (1986) symmetric quadratic quality loss function is adopted for evaluating product quality. By solving the modified model, we obtain the optimum process parameters that maximize the expected profit per item while attaining the specified quality level.
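As a rough illustration of the kind of objective involved, the sketch below maximizes a hypothetical expected profit per item under Taguchi's quadratic loss for a normally distributed quality characteristic; the target, loss coefficient, price, and cost figures are invented, and this toy model omits the article's rectifying inspection plan and AOQL constraint:

```python
import numpy as np

def expected_quadratic_loss(mu, sigma, target, k):
    # Taguchi loss L(x) = k * (x - target)^2; for X ~ N(mu, sigma^2),
    # E[L] = k * ((mu - target)^2 + sigma^2)
    return k * ((mu - target) ** 2 + sigma ** 2)

def expected_profit(mu, sigma, target=10.0, k=1.5, price=25.0, cost_per_unit=0.8):
    # Hypothetical profit model: selling price minus material cost
    # (proportional to the process mean) minus expected quality loss.
    return price - cost_per_unit * mu - expected_quadratic_loss(mu, sigma, target, k)

# Grid search for the profit-maximizing process mean at fixed sigma;
# for this toy model the analytic optimum is target - cost_per_unit / (2 * k).
mus = np.linspace(5.0, 15.0, 2001)
best_mu = mus[np.argmax(expected_profit(mus, sigma=1.0))]
```

Note how the loss term pulls the optimum mean slightly below the target because each unit of material has a cost.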
3.
Consider a skewed population. Suppose an intelligent guess can be made about an interval that contains the population mean. Within such an interval there may exist biased estimators with smaller mean squared error than the arithmetic mean. This article indicates when it is advisable to shrink the arithmetic mean towards a guessed interval using root estimators. The goal is to obtain an estimator that is better near the average of natural origins. An estimator is proposed that contains the Thompson (1968) ordinary shrinkage estimator, the Jenkins et al. (1973) square-root estimator, and the arithmetic sample mean as special cases. The bias and the mean squared error of the proposed more general estimator are compared with those of the three special cases. Shrinkage coefficients that yield minimum mean squared error estimators are obtained. The proposed estimator is considerably more efficient than the three special cases, and this remains true for highly skewed populations. The merits of the proposed shrinkage square-root estimator are supported by the results of numerical and simulation studies.
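A minimal sketch of point shrinkage toward a guessed interval, in the spirit of the Thompson (1968) special case; the article's square-root estimator and its optimal shrinkage coefficients are not reproduced here, and the sample values and interval below are invented:

```python
import numpy as np

def shrunken_mean(sample, guess_low, guess_high, k=0.5):
    """Shrink the sample mean toward the midpoint of a guessed interval.

    Illustrative Thompson-style point shrinkage only, not the article's
    more general square-root estimator. k = 0 returns the guess midpoint,
    k = 1 the ordinary sample mean.
    """
    xbar = np.mean(sample)
    mid = 0.5 * (guess_low + guess_high)
    return k * xbar + (1.0 - k) * mid

x = np.array([2.0, 3.0, 7.0, 12.0, 26.0])   # a small skewed sample, mean 10
est = shrunken_mean(x, guess_low=6.0, guess_high=10.0, k=0.7)
```

When the guessed interval really does contain the population mean, such estimators trade a little bias for a reduction in variance.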
4.
Shaowen Wu, Communications in Statistics - Simulation and Computation, 2013, 42(8):1590-1604
We reinvestigate the empirical problem of lag length selection in unit root tests when using the augmented Dickey–Fuller (ADF) test based on GLS detrending. We extend the work of Ng and Perron (1995) on this issue by applying the finite sample critical values calculated using the formulae proposed by Cheung and Lai (1995). Unlike Ng and Perron (2001), we find through simulation studies that selecting the lag length by the sequential t-test in the ADF regression of the GLS-detrended series performs best in most cases.
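The sequential t-test rule can be sketched as follows. This toy version applies the general-to-specific rule to an ordinary ADF regression, omitting the GLS detrending step, and the 10% critical value 1.645 is a common but illustrative choice:

```python
import numpy as np

def adf_design(y, p):
    # ADF regression: dy_t = a + rho * y_{t-1} + sum_{i=1..p} phi_i * dy_{t-i} + e_t
    dy = np.diff(y)
    rows, resp = [], []
    for t in range(p, len(dy)):
        rows.append([1.0, y[t]] + [dy[t - i] for i in range(1, p + 1)])
        resp.append(dy[t])
    return np.array(rows), np.array(resp)

def last_lag_tstat(y, p):
    # OLS t-statistic of the longest augmentation lag, phi_p
    X, d = adf_design(y, p)
    beta = np.linalg.lstsq(X, d, rcond=None)[0]
    resid = d - X @ beta
    s2 = resid @ resid / (len(d) - X.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[-1, -1])
    return beta[-1] / se

def select_lag_sequential(y, p_max=6, crit=1.645):
    # General-to-specific: starting from p_max, drop the longest lag
    # while it is insignificant, then stop at the first significant lag.
    for p in range(p_max, 0, -1):
        if abs(last_lag_tstat(y, p)) >= crit:
            return p
    return 0

rng = np.random.default_rng(42)
y = np.cumsum(rng.standard_normal(300))  # a pure random walk needs few lags
p_hat = select_lag_sequential(y)
```

The same rule is what the article applies inside the GLS-detrended ADF regression.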
5.
Two types of estimates of process level, namely repeated median estimates (Siegel, 1982) and full online estimates (Gather et al., 2006) based on repeated median filters, are used to develop control charts. The distributional properties of the estimates are studied using simulation and are found to closely follow a normal distribution. The repeated median, being robust against outliers with an asymptotic 50% breakdown value and having a small standard deviation, is found to be useful as a basis for monitoring process averages. Control charts using repeated median estimates are recommended for general use.
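Siegel's (1982) repeated median fit, which underlies both types of estimates, can be sketched as follows (the window data below are invented, with one planted outlier):

```python
import numpy as np

def repeated_median_fit(t, y):
    """Siegel's (1982) repeated median regression: a robust slope and
    intercept with an asymptotic 50% breakdown point."""
    n = len(t)
    slopes_i = []
    for i in range(n):
        # median of pairwise slopes through point i ...
        pair = [(y[j] - y[i]) / (t[j] - t[i]) for j in range(n) if j != i]
        slopes_i.append(np.median(pair))
    slope = np.median(slopes_i)          # ... then the median over i
    intercept = np.median(y - slope * t)
    return intercept, slope

# Robust level estimate of a process window contaminated by one outlier
t = np.arange(9.0)
y = 10.0 + 0.5 * t
y[4] = 50.0  # gross outlier
b0, b1 = repeated_median_fit(t, y)
```

The single gross outlier leaves the fitted intercept and slope essentially untouched, which is why the chart abstract above recommends this estimate for monitoring process averages.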
6.
Yang Zhao, Communications in Statistics - Theory and Methods, 2013, 42(20):3736-3744
Statistical analysis for the regression model f_β(y | x, z) with missing values in the covariate vector X requires modeling of the covariate distribution g(x | z). Likelihood methods, including Ibrahim (1990), Chen (2004), and Zhao (2005), need either X or Z to be discrete. This article considers extending the likelihood methods to deal with cases where both X and Z may be continuous. We propose modeling the covariate distribution g(x | z) using a piecewise nonparametric model; a maximum likelihood estimate (MLE) of β can then be computed following the maximum likelihood estimating procedure of Chen (2004) or Zhao (2005). The resulting estimation method is easy to implement, and the asymptotic properties of the MLE follow under certain conditions. Extensive simulation studies for different models indicate that the proposed method is acceptable for practical implementation. A real data example is used to illustrate the method.
7.
Communications in Statistics - Theory and Methods, 2013, 42(9):1789-1799
Abstract In a recent article, Hsueh et al. (2001, Biometrics 57:478–483) considered unconditional exact tests for equivalence or noninferiority for paired binary endpoints. They suggested two statistics, one of which is based on the restricted maximum-likelihood estimator. Properties of these statistics and the related tests are treated in this article.
8.
Joseph V. Terza, Econometric Reviews, 2013, 32(6):555-580
Based on the insightful work of Olsen (1980) for the linear context, a generic and unifying framework is developed that affords a simple extension of the classical method of Heckman (1974, 1976, 1978, 1979) to a broad class of nonlinear regression models involving endogenous switching and its two most common incarnations, endogenous sample selection and endogenous treatment effects. The approach should be appealing to applied researchers for three reasons. First, econometric applications involving endogenous switching abound. Second, the approach requires neither linearity of the regression function nor full parametric specification of the model. It can, in fact, be applied under minimal parametric assumptions, i.e., specification of only the conditional means of the outcome and switching variables. Finally, it is amenable to relatively straightforward estimation methods. Examples of applications of the method are discussed.
9.
Griliches and Hausman [5] and Wansbeek [11] proposed using the generalized method of moments (GMM) to obtain consistent estimators in linear regression models for longitudinal data with measurement error in one covariate, without requiring additional validation or replicate data. For this methodology to be useful, it must be extended to the more realistic situation where more than one covariate is measured with error. Such an extension is not straightforward, since measurement errors across different covariates may be correlated. By a careful construction of the measurement error correlation structure, we are able to extend Wansbeek's GMM and show that the extended Griliches and Hausman GMM is equivalent to the extended Wansbeek GMM. For illustration, we apply the extended GMM to data from two medical studies and compare it with the naive method and with the method assuming only one covariate is measured with error.
10.
There is an emerging consensus in empirical finance that realized volatility series typically display long range dependence with a memory parameter (d) around 0.4 (Andersen et al., 2001; Martens et al., 2004). The present article provides some illustrative analysis of how long memory may arise from the accumulative process underlying realized volatility. The article also uses results in Lieberman and Phillips (2004, 2005) to refine statistical inference about d by higher order theory. Standard asymptotic theory has an O(n^(-1/2)) error rate for rejection probabilities, and the theory used here refines the approximation to an error rate of o(n^(-1/2)). The new formula is independent of unknown parameters, simple to calculate, and user-friendly. The method is applied to test whether the reported long memory parameter estimates of Andersen et al. (2001) and Martens et al. (2004) differ significantly from the lower boundary (d = 0.5) of nonstationary long memory, and generally confirms earlier findings.
11.
In incident cohort studies, survival data often include subjects who have had an initiating event at recruitment and may potentially experience two successive events (first and second) during the follow-up period. Since the second duration process becomes observable only if the first event has occurred, left-truncation and dependent censoring arise if the two duration times are correlated. To confront the two potential sampling biases, Chang and Tzeng (2006) provided an inverse-probability-weighted (IPW) approach for estimating the joint probability function of successive duration times. In this note, an alternative IPW approach is proposed. A simulation study is conducted to compare the two IPW approaches.
12.
Communications in Statistics - Theory and Methods, 2013, 42(11):2123-2131
ABSTRACT There are several indices for measuring the similarity of two populations, including the ratio of the number of shared species to the number of distinct species (Jaccard's index) and the conditional probability of observing a shared species (Smith et al., 1996). However, these indices only take into account the number and species proportions of the shared species. In this article, we propose a new similarity index which includes the species proportions of both the shared and non-shared species in each population, and we also propose a nonparametric maximum likelihood estimator (NPMLE) for this index. Bootstrap and delta methods are used to evaluate the standard errors of the NPMLE. Based on a loss function, we also compare a class of nonparametric estimators for the proposed index in various situations.
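For concreteness, the classical Jaccard index mentioned above can be computed as follows (the species names are invented):

```python
def jaccard_index(pop_a, pop_b):
    """Jaccard similarity: number of shared species divided by the number
    of distinct species. Note that this classical index ignores species
    proportions, which is the limitation the new index addresses."""
    a, b = set(pop_a), set(pop_b)
    return len(a & b) / len(a | b)

# 2 shared species out of 5 distinct species overall
sim = jaccard_index(["sparrow", "robin", "finch"],
                    ["robin", "finch", "crow", "wren"])
```

Two populations with identical species lists but wildly different abundances score 1 under this index, which motivates incorporating proportions.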
13.
This article is devoted to the study of the periodicity testing problem in a self-exciting threshold autoregressive (SETAR) model. The local asymptotic normality (LAN) property is shown via adapted sufficient conditions due to Swensen (1985), and the LAN of the central sequence is established. First, we consider the case where the innovation density is specified and obtain a parametric locally asymptotic test. Second, we construct an adaptive test for the case where this density is unspecified but symmetric. The performance of these tests is illustrated via simulation studies.
14.
This article focuses attention on the self-exciting threshold autoregressive moving average (SETARMA) model proposed in Tong (1983). The stochastic structure of the model is discussed and different specifications are presented. Starting from one of them, we give sufficient conditions for the weak stationarity of the model, which are discussed and critically compared with other results in the literature. In particular, after showing that the SETARMA model belongs to the class of random coefficient autoregressive models, widely discussed in Nicholls and Quinn (1982), we derive conditions for the weak stationarity of its stochastic structure that are more general than those in the existing literature and are not affected by the moving average component.
15.
Methods based on scan statistics are widely used in health-related applications to detect clusters of disease. The most common methods are based on the Bernoulli and Poisson models. Kulldorff (1997) derived the likelihood ratio test statistic for his scan method for both of these models. His scan statistic is widely used with freely available software, SaTScan (see Kulldorff, 2005). We provide an alternative derivation of the likelihood ratio test statistic in the Poisson case. Our derivation is simpler and more general in the sense that it applies when the incidences are not aggregated into subregional counts.
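A sketch of Kulldorff's Poisson-model log likelihood ratio for a single candidate window; the observed/expected counts below are invented, and a real analysis would maximize over a large family of spatial windows and assess significance by Monte Carlo:

```python
import math

def poisson_scan_llr(c, e, C):
    """Kulldorff's (1997) log likelihood ratio for one candidate window
    under the Poisson model: c observed cases in the window, e expected
    cases under the null, C total cases in the study region."""
    if c <= e:  # only elevated windows contribute evidence of a cluster
        return 0.0
    return c * math.log(c / e) + (C - c) * math.log((C - c) / (C - e))

# The scan statistic is the maximum LLR over all candidate windows.
windows = [(12, 5.0), (30, 28.0), (8, 7.5)]  # (observed, expected) pairs
C = 100
scan_stat = max(poisson_scan_llr(c, e, C) for c, e in windows)
```

The first window, with 12 cases against 5 expected, dominates and would be reported as the most likely cluster.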
16.
In this article, we introduce a new distribution-free Shewhart-type control chart that takes into account the location of a single order statistic of the test sample (such as the median) as well as the number of observations in that test sample that lie between the control limits. Exact formulae for the alarm rate, the run length distribution, and the average run length (ARL) are all derived. A key advantage of the chart is that, due to its nonparametric nature, the false alarm rate and in-control run length distribution are the same for all continuous process distributions, so the chart is naturally robust. Tables are provided for the implementation of the chart for some typical ARL values and false alarm rates. The empirical study carried out reveals that the new chart is preferable from a robustness point of view in comparison to a classical Shewhart-type chart and also to the nonparametric chart of Chakraborti et al. (2004).
17.
R. Hasan Abadi, Communications in Statistics - Simulation and Computation, 2013, 42(8):1430-1443
Censored data arise naturally in a number of fields, particularly in problems of reliability and survival analysis. There are several types of censoring; in this article, we confine ourselves to random right censoring. Recently, Ahmadi et al. (2010) considered the problem of estimating unknown parameters in a general framework based on right randomly censored data, assuming that the survival function of the censoring time is free of the unknown parameter. This assumption is sometimes inappropriate, and in such cases a proportional odds (PO) model may be more appropriate (Lam and Leung, 2001). Under this model, point and interval estimates for the unknown parameters are obtained in this article. Since it is important to check the adequacy of models upon which inferences are based (Lawless, 2003, p. 465), two new goodness-of-fit tests for the PO model based on right randomly censored data are proposed. The proposed procedures are applied to two real data sets due to Smith (2002). A Monte Carlo simulation study is conducted to examine the behavior of the estimators obtained.
18.
19.
The cost and time consumption of many industrial experiments can be reduced by using supersaturated designs, which can screen out the important factors from a large set of potentially active variables. A supersaturated design is a design for which there are fewer runs than effects to be estimated. Although there is a wide literature on construction methods for supersaturated designs, methods for their analysis are still at an early research stage. In this article, we propose a method for analyzing data using a correlation-based measure called symmetrical uncertainty. This measure comes from information theory and is the main idea of variable selection algorithms developed in data mining. In this work, symmetrical uncertainty is used from another viewpoint in order to determine the important factors more directly. The method enables us to use supersaturated designs for analyzing data of generalized linear models with a Bernoulli response. We evaluate our method using some existing supersaturated designs obtained by the methods proposed by Tang and Wu (1997) and by Koukouvinos et al. (2008). The comparison is performed through simulation experiments, and the Type I and Type II error rates are calculated. Additionally, receiver operating characteristic (ROC) curve methodology is applied as an additional statistical tool for performance evaluation.
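Symmetrical uncertainty itself has a standard information-theoretic form, SU(X, Y) = 2 I(X; Y) / (H(X) + H(Y)). A minimal sketch on discrete data follows; the factor/response values are invented, and this is only the measure, not the article's full variable selection procedure:

```python
import math
from collections import Counter

def entropy(xs):
    # Shannon entropy in bits of an empirical discrete distribution
    n = len(xs)
    return -sum((c / n) * math.log2(c / n) for c in Counter(xs).values())

def symmetrical_uncertainty(x, y):
    """SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)), a correlation-type measure
    in [0, 1]; 1 means each variable determines the other, 0 independence."""
    hx, hy = entropy(x), entropy(y)
    hxy = entropy(list(zip(x, y)))  # joint entropy H(X, Y)
    mi = hx + hy - hxy              # mutual information I(X; Y)
    return 2.0 * mi / (hx + hy) if hx + hy > 0 else 0.0

# A factor column perfectly aligned with the Bernoulli response scores SU = 1
factor = [-1, -1, 1, 1, -1, 1]
response = [0, 0, 1, 1, 0, 1]
su = symmetrical_uncertainty(factor, response)
```

Ranking the design columns by SU against the response is the kind of direct screening step the article builds on.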
20.
ABSTRACT This paper reviews and extends the literature on the finite sample behavior of tests for sample selection bias. Monte Carlo results show that, when the "multicollinearity problem" identified by Nawata (1993) is severe, (i) the t-test based on the Heckman–Greene variance estimator can be unreliable, (ii) the Likelihood Ratio test remains powerful, and (iii) nonnormality can be interpreted as severe sample selection bias by Maximum Likelihood methods, leading to negative Wald statistics. We also confirm previous findings (Leung and Yu, 1996) that the standard regression-based t-test (Heckman, 1979) and the asymptotically efficient Lagrange Multiplier test (Melino, 1982) are robust to nonnormality but have very little power.