1.
ABSTRACT This paper reviews and extends the literature on the finite sample behavior of tests for sample selection bias. Monte Carlo results show that, when the "multicollinearity problem" identified by Nawata (1993) is severe, (i) the t-test based on the Heckman–Greene variance estimator can be unreliable, (ii) the Likelihood Ratio test remains powerful, and (iii) nonnormality can be interpreted as severe sample selection bias by Maximum Likelihood methods, leading to negative Wald statistics. We also confirm previous findings (Leung and Yu, 1996) that the standard regression-based t-test (Heckman, 1979) and the asymptotically efficient Lagrange Multiplier test (Melino, 1982) are robust to nonnormality but have very little power.
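A minimal sketch of the regression-based test for selection bias in a simulated Heckman-type model. All parameter values are illustrative, and for brevity the first-step probit coefficients are taken as known rather than estimated:

```python
import numpy as np
from scipy.stats import norm

# Simulated sample-selection model (illustrative coefficients throughout).
rng = np.random.default_rng(0)
n = 20000
z = rng.normal(size=n)
x = rng.normal(size=n)
u, e = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=n).T
index = 0.5 + z                          # probit selection index (assumed known)
selected = index + u > 0
y = 1.0 + 2.0 * x + e                    # outcome, observed only when selected

# Second step: add the inverse Mills ratio as a regressor; a nonzero
# coefficient on `imr` (here rho * sigma_e = 0.5) signals selection bias.
imr = norm.pdf(index) / norm.cdf(index)
X = np.column_stack([np.ones(n), x, imr])[selected]
beta, *_ = np.linalg.lstsq(X, y[selected], rcond=None)
print(beta)  # roughly [1.0, 2.0, 0.5]
```

The t-test on the `imr` coefficient is the regression-based test referred to in the abstract; the Monte Carlo designs in the paper vary the correlation between `z` and `x` to induce the multicollinearity problem.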
2.
This article concerns the probabilistic and statistical properties of a parametric extension of the so-called epsilon-skew-normal (ESN) distribution introduced by Mudholkar and Hutson (2000), which adds a further shape parameter to increase the flexibility of the ESN distribution. The article also develops likelihood inference for the parameters of the new class. In particular, the information matrix of the maximum likelihood estimators is obtained and shown to be nonsingular in the special normal case. Finally, the statistical methods are illustrated with two examples based on real datasets.
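A sketch of the base ESN density of Mudholkar and Hutson (2000), with illustrative parameter values, verifying numerically that it integrates to one:

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def esn_pdf(x, theta=0.0, sigma=1.0, eps=0.3):
    """Epsilon-skew-normal density: the two halves of a normal density
    around theta are rescaled by sigma*(1+eps) and sigma*(1-eps)."""
    scale = sigma * (1 + eps) if x < theta else sigma * (1 - eps)
    return norm.pdf((x - theta) / scale) / sigma

# The half-masses are (1+eps)/2 and (1-eps)/2, so the density integrates to 1.
total = quad(esn_pdf, -np.inf, 0.0)[0] + quad(esn_pdf, 0.0, np.inf)[0]
print(total)  # 1.0 up to quadrature error
```

The extension discussed in the abstract introduces an additional shape parameter on top of this construction.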
3.
Griliches and Hausman [5] and Wansbeek [11] proposed using the generalized method of moments (GMM) to obtain consistent estimators in linear regression models for longitudinal data with measurement error in one covariate, without requiring additional validation or replicate data. For this methodology to be useful, it must be extended to the more realistic situation in which more than one covariate is measured with error. Such an extension is not straightforward, since measurement errors across different covariates may be correlated. By carefully constructing the measurement error correlation structure, we extend Wansbeek's GMM and show that the extended Griliches–Hausman GMM is equivalent to the extended Wansbeek GMM. For illustration, we apply the extended GMM to data from two medical studies and compare it with the naive method and with the method that assumes only one covariate is measured with error.
4.
We consider a new generalization of the skew-normal distribution introduced by Azzalini (1985). We call this distribution the Beta skew-normal (BSN), since it is a special case of the Beta-generated distribution (Jones, 2004). Some properties of the BSN are studied. We pay particular attention to several generalizations of the skew-normal distribution (Bahrami et al., 2009; Sharafi and Behboodian, 2008; Yadegari et al., 2008) and to their relations with the BSN.
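A sketch of the Beta-generated construction with a skew-normal parent, the mechanism named in the abstract; the shape values are illustrative:

```python
import numpy as np
from scipy.stats import skewnorm, beta as beta_dist
from scipy.integrate import quad

def bsn_pdf(x, lam=2.0, a=2.0, b=3.0):
    """Beta-generated density with skew-normal parent (Jones, 2004):
    g(x) = f(x) * F(x)^(a-1) * (1 - F(x))^(b-1) / B(a, b),
    where f and F are the skew-normal pdf and cdf with slant lam."""
    return beta_dist.pdf(skewnorm.cdf(x, lam), a, b) * skewnorm.pdf(x, lam)

# Change of variable u = F(x) shows the density integrates to 1.
total = quad(bsn_pdf, -np.inf, np.inf)[0]
print(total)
```

Taking a = b = 1 recovers Azzalini's skew-normal parent, which is why the BSN is a genuine generalization.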
5.
In Bayesian inference it is often desirable to have a posterior density that mainly reflects the information in the sample data. To achieve this, it is important to employ prior densities that add little information to the sample. Many such priors appear in the literature, for example Jeffreys (1967), Lindley (1956, 1961), Hartigan (1964), Bernardo (1979), Zellner (1984), and Tibshirani (1989). In the present article, we compare the posterior densities of the reliability function R(t) of a Weibull distribution obtained under the Jeffreys, maximal data information (Zellner, 1984), Tibshirani, and reference priors.
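The quantity being compared across priors is the Weibull reliability function; a minimal sketch with illustrative shape and scale values:

```python
import numpy as np

def weibull_reliability(t, shape=1.5, scale=10.0):
    """Weibull reliability (survival) function R(t) = exp(-(t/scale)^shape)."""
    return np.exp(-(np.asarray(t, dtype=float) / scale) ** shape)

print(weibull_reliability(0.0))   # R(0) = 1
print(weibull_reliability(10.0))  # R(scale) = exp(-1), for any shape
```

Each prior in the comparison induces a different posterior for (shape, scale), and hence a different posterior distribution for R(t) at each fixed t.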
6.
The role of uniformity measured by the symmetric L2-discrepancy of Hickernell (1998) has been studied in fractional factorial designs. Lower bounds on the symmetric L2-discrepancy are crucial in the construction of uniform designs. This article reports some new lower bounds on the symmetric L2-discrepancy for symmetric fractional factorials and for a class of asymmetric fractional factorials. These lower bounds are useful for measuring the uniformity of given designs.
7.
Communications in Statistics: Theory and Methods, 2013, 42(7): 1533-1541
ABSTRACT The systematic sampling (SYS) design (Madow and Madow, 1944) is widely used by statistical offices due to its simplicity and efficiency (e.g., Iachan, 1982). But it suffers from a serious defect, namely that it is impossible to estimate the sampling variance unbiasedly (Iachan, 1982), and the usual variance estimators (Yates and Grundy, 1953) are inadequate and can significantly overestimate the variance (Särndal et al., 1992). We propose a novel variance estimator that is less biased and can be implemented with any given population order. We justify this estimator theoretically and with a Monte Carlo simulation study.
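A minimal sketch of linear systematic selection, which also shows why unbiased variance estimation fails: a single random start determines the entire sample. Sizes are illustrative:

```python
import numpy as np

def sys_sample(N, n, rng):
    """Linear systematic sample: pick a random start r in [0, k), k = N // n,
    then take every k-th unit. Assumes N is a multiple of n for simplicity."""
    k = N // n
    r = rng.integers(k)
    return np.arange(r, N, k)

rng = np.random.default_rng(1)
idx = sys_sample(100, 10, rng)
print(idx)  # one random start fixes all n units
```

Because only k distinct samples are possible, pairs of units from different samples have zero joint inclusion probability, which rules out Yates–Grundy-type unbiased variance estimators.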
8.
Communications in Statistics: Theory and Methods, 2012, 41(16-17): 3198-3210
The randomized response (RR) technique with two decks of cards proposed by Odumade and Singh (2009) can always be made more efficient than the RR techniques proposed by Warner (1965), Mangat and Singh (1990), and Mangat (1994) by adjusting the proportion of cards in the decks. The method of Odumade and Singh (2009) is, however, limited to simple random sampling with replacement (SRSWR). In this article, the Odumade and Singh strategy is generalized to complex survey designs and a wider class of estimators. The results of Odumade and Singh (2009) can be derived from the proposed method as a special case.
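For context, a sketch of the baseline Warner (1965) randomized response estimator that these card-deck designs refine; the design probability P, the true prevalence, and the sample size are illustrative:

```python
import numpy as np

def warner_estimate(yes_frac, P):
    """Warner (1965): with probability P the respondent answers about trait A,
    otherwise about not-A, so E[yes] = P*pi + (1-P)*(1-pi).
    Solving gives pi_hat = (yes_frac - (1-P)) / (2P - 1), for P != 1/2."""
    return (yes_frac - (1 - P)) / (2 * P - 1)

rng = np.random.default_rng(2)
pi_true, P, n = 0.3, 0.7, 100_000
member = rng.random(n) < pi_true      # true (unobserved) trait status
ask_direct = rng.random(n) < P        # outcome of the randomization device
yes = np.where(ask_direct, member, ~member)
print(warner_estimate(yes.mean(), P))  # close to 0.3
```

Two-deck designs such as Odumade and Singh's replace the single probability P with two deck compositions, which is what allows the variance to be tuned downward.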
9.
Communications in Statistics: Theory and Methods, 2012, 41(21): 4034-4046
We propose an objective Bayesian approach to the analysis of degradation models. For linear degradation models, two reference priors are derived, and the corresponding posterior distributions are shown to be proper. Since the lifetime of the product is of practical interest, a transformation is introduced to obtain the reference priors for the median lifetime. In the posterior analysis, we explore two sampling procedures: a Monte Carlo (MC) procedure and a Markov chain Monte Carlo (MCMC) procedure. A real dataset from Takeda and Suzuki (1983) is analyzed, and the results obtained by both procedures are close to those reported in the literature.
10.
Raja Rao et al. (1993) introduced the bivariate "setting the clock back to zero" property. A new variant of this property is introduced that is appropriate for analysing a broader range of practical situations. Some distributions possessing the proposed property are presented. Applications of this property to simplifying the computation of the bivariate mean residual life function and the bivariate percentile residual life function are studied. The relations between the proposed property, the one studied by Raja Rao and Talwalker (1990), and the bivariate lack-of-memory property are also examined.
11.
Yosihito Maruyama, Communications in Statistics: Theory and Methods, 2013, 42(13): 2116-2123
Srivastava (1984) defined a measure of multivariate kurtosis and derived its asymptotic distribution for samples from a multivariate normal population. Some new results are obtained by generalizing Srivastava's theorem to an asymptotic expansion of higher order. Finally, two numerical examples are presented.
12.
Julián de la Horra, Communications in Statistics: Theory and Methods, 2013, 42(10): 1905-1914
The positive false discovery rate was introduced by Storey (2003) as an alternative to the familywise error rate when a large number of hypotheses are tested simultaneously. The positive false discovery rate has a very natural Bayesian interpretation (as shown by Storey, 2003), and here its robustness is analyzed. The emphasis is on the ε-contamination class (one of the most widely used classes of priors in Bayesian robustness), and it is shown that robustness is not obtained when the base prior concentrates its probability on the null hypothesis.
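Storey's Bayesian interpretation of the pFDR is Pr(H0 true | test rejected); a small simulation sketch with illustrative mixture parameters comparing this to the closed form:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
m, pi0, alpha, mu1 = 200_000, 0.8, 0.05, 2.0
null = rng.random(m) < pi0                 # which hypotheses are true nulls
z = np.where(null, rng.normal(size=m), rng.normal(mu1, 1.0, size=m))
p = norm.sf(z)                             # one-sided p-values
reject = p <= alpha

# Empirical pFDR: the proportion of rejections that are true nulls.
pfdr_emp = np.mean(null[reject])
# Bayes formula: pi0 * alpha / Pr(reject), with power = P(Z > z_alpha - mu1).
power = norm.sf(norm.isf(alpha) - mu1)
pfdr_theory = pi0 * alpha / (pi0 * alpha + (1 - pi0) * power)
print(pfdr_emp, pfdr_theory)
```

The ε-contamination analysis in the article asks how this posterior probability varies as the prior (here the mixture weight `pi0`) ranges over a contamination neighborhood.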
13.
Feng-Shou Ko, Communications in Statistics: Theory and Methods, 2013, 42(15): 2681-2698
A proposed method based on frailty models is used to identify longitudinal biomarkers or surrogates for multivariate survival times. This method is an extension of earlier models by Wulfsohn and Tsiatis (1997) and Song et al. (2002). In this article, similar to Henderson et al. (2002), a joint likelihood function combines the likelihood functions of the longitudinal biomarkers and the multivariate survival times. We use simulations to explore how the number of individuals, the number of time points per individual, and the functional form of the random effects of the longitudinal biomarkers influence the power to detect an association between a longitudinal biomarker and the multivariate survival time. The proposed method is illustrated using gastric cancer data.
14.
Constantinos Petropoulos, Communications in Statistics: Theory and Methods, 2013, 42(17): 3153-3162
Under Stein's loss, a class of improved estimators for the scale parameter of a mixture of exponential distributions with unknown location is constructed. The method is analogous to Maruyama's (1998) construction for the variance of a normal distribution and extends the result of Petropoulos and Kourouklis (2002). Robustness properties are also considered.
15.
We consider non-parametric estimation of a continuous cdf of a random vector (X1, X2). For bivariate right-censored (RC) data, it is stated in van der Laan (1996, p. 598, Ann. Statist.), Quale et al. (2006, JASA), and elsewhere that "it is well known that the NPMLE for continuous data is inconsistent (Tsai et al. (1986))." The claim is based on a result in Tsai et al. (1986, p. 1352, Ann. Statist.) that if X1 is right censored but X2 is not, then common ways of defining one NPMLE lead to inconsistency. If X1 is right censored and X2 is type I right-censored (which includes the case in Tsai et al.), we present a consistent NPMLE. The result corrects a common misinterpretation of Tsai's example (Tsai et al., 1986, Ann. Statist.).
16.
Johnson (1970) obtained expansions for marginal posterior distributions through Taylor expansions. Here, the posterior expansion is expressed in terms of the likelihood and the prior together with their derivatives. Recently, Weng (2010) used a version of Stein's identity to derive a Bayesian Edgeworth expansion expressed in terms of posterior moments. Since the pivots used in these two articles are the same, it is of interest to compare the two expansions. We find that our O(t^{-1/2}) term agrees with Johnson's arithmetically, but the O(t^{-1}) term does not. Simulations confirm this finding and reveal that our O(t^{-1}) term performs better than Johnson's.
17.
ABSTRACT In this article, we improve upon the Singh and Grewal (2013) and Hussain et al. (2016) techniques by introducing a new two-stage randomized response process. The proposed technique achieves better efficiency and greater protection of respondents' privacy than the Kuk (1990), Singh and Grewal (2013), and Hussain et al. (2016) models. The relative efficiency and respondent protection of the proposed two-stage randomization device are investigated through a simulation study, and the situations in which the proposed estimator outperforms its competitors are reported. The SAS code used to investigate the performance of the proposed strategy is also provided.
18.
The concept of inclusion probability proportional to size sampling plans excluding adjacent units separated by at most a distance of m (≥ 1) units (IPPSEA plans) is introduced. IPPSEA plans ensure that the first-order inclusion probabilities of units are proportional to the size measures of the units, while the second-order inclusion probabilities are zero for pairs of adjacent units separated by a distance of m units or less. IPPSEA plans are obtained using binary, proper, and unequireplicated block designs together with a linear programming approach. The performance of IPPSEA plans with the Horvitz–Thompson estimator of the population total is compared with that of existing sampling plans, such as simple random sampling without replacement (SRSWOR), balanced sampling plans excluding adjacent units (BSA(m) plans), probability proportional to size with replacement, Hartley and Rao's plan (1962), Rao et al.'s strategy (1962), and Sampford's IPPS plan (1967), using a real-life population. Unbiased variance estimation for the Horvitz–Thompson estimator of the population total is not possible under such plans because some of the second-order inclusion probabilities are zero. To resolve this problem, an approximate variance estimation technique is suggested.
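A minimal sketch of the Horvitz–Thompson estimator of a population total, checked here under simple Bernoulli (Poisson) sampling with a common inclusion probability; the population values are illustrative:

```python
import numpy as np

def horvitz_thompson(y_sample, pi_sample):
    """HT estimator of the population total: sum of y_i / pi_i over the sample.
    Unbiased whenever every unit has pi_i > 0."""
    return np.sum(np.asarray(y_sample) / np.asarray(pi_sample))

rng = np.random.default_rng(4)
y = rng.uniform(1.0, 10.0, size=1000)   # toy population values
pi = np.full(1000, 0.5)                 # first-order inclusion probabilities
inside = rng.random(1000) < pi          # Bernoulli sampling realization
est = horvitz_thompson(y[inside], pi[inside])
print(est, y.sum())  # estimate vs. true total
```

Unbiased *variance* estimation additionally requires all joint inclusion probabilities pi_ij to be positive, which is exactly what IPPSEA plans give up by forcing pi_ij = 0 for adjacent pairs.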
19.
The Significance Analysis of Microarrays (SAM; Tusher et al., 2001) method is widely used for analyzing gene expression data while controlling the FDR via a resampling-based procedure in the microarray setting. One of the main components of the SAM procedure is the adjustment of the test statistic: a fudge factor is added to its denominator to deflate the large values that arise from genes with small standard errors. Lin et al. (2008) pointed out that, in the presence of small-variance genes, the fudge factor does not effectively improve the power or the control of the FDR compared with the SAM procedure without the fudge factor. Motivated by the simulation results in Lin et al. (2008), in this article we compare several methods for choosing the fudge factor in modified t-type test statistics and use simulation studies to investigate the power and FDR control of the considered methods.
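A sketch of the SAM-style modified t-type statistic with a fudge factor in the denominator; the data and the value of s0 are illustrative (SAM itself chooses s0 from the data):

```python
import numpy as np

def sam_statistic(x, y, s0):
    """Per-gene relative difference d = (mean(x) - mean(y)) / (s + s0),
    where s is the usual two-sample pooled standard error and s0 >= 0
    is the fudge factor. x, y have shape (genes, replicates)."""
    nx, ny = x.shape[1], y.shape[1]
    r = x.mean(axis=1) - y.mean(axis=1)
    ss = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1) + \
         ((y - y.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)
    s = np.sqrt(ss / (nx + ny - 2) * (1 / nx + 1 / ny))
    return r / (s + s0)

rng = np.random.default_rng(5)
x, y = rng.normal(size=(100, 4)), rng.normal(size=(100, 4))
d0 = sam_statistic(x, y, s0=0.0)   # ordinary t-type statistic
d1 = sam_statistic(x, y, s0=1.0)   # fudge factor shrinks |d| for tiny s
print(np.abs(d0).max(), np.abs(d1).max())
```

With s0 > 0 every |d| is shrunk, and most strongly for genes whose standard error s is near zero, which is the deflation effect described above.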
20.
Mike G. Tsionas, Communications in Statistics: Theory and Methods, 2018, 47(12): 3022-3028
The properties of high-dimensional Bingham distributions have been studied by Kume and Walker (2014). Fallaize and Kypraios (2016) proposed Bayesian inference for the Bingham distribution, drawing on developments in Bayesian computation for distributions with doubly intractable normalizing constants (Møller et al., 2006; Murray, Ghahramani, and MacKay, 2006). However, their approach relies heavily on two Metropolis updates that must be tuned. In this article, we instead propose model selection via the marginal likelihood.