Similar Articles
20 similar articles found (search time: 31 ms)
1.
The traditional non-parametric bootstrap (referred to as the n-out-of-n bootstrap) is a widely applicable and powerful tool for statistical inference, but in important situations it can fail. It is well known that by using a bootstrap sample of size m, different from n, the resulting m-out-of-n bootstrap rectifies the traditional bootstrap's inconsistency. Moreover, recent studies have shown that there are interesting cases where it is better to use the m-out-of-n bootstrap even though the n-out-of-n bootstrap works. In this paper, we discuss another such case by considering the method's application to hypothesis testing. Two new data-based choices of m are proposed in this setting. Results of simulation studies are presented to compare the performance of the traditional bootstrap with that of the m-out-of-n bootstrap, based on the two data-dependent choices of m as well as on an existing method in the literature for choosing m. These results show that the m-out-of-n bootstrap, based on our choices of m, generally outperforms both the traditional bootstrap procedure and the procedure based on the choice of m proposed in the literature.
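
A minimal sketch of the mechanism in Python (the abstract does not spell out the proposed data-based choices of m, so the resample size m = √n below is only a placeholder; the sample maximum under a uniform model is a textbook case where the n-out-of-n bootstrap is inconsistent):

```python
import numpy as np

rng = np.random.default_rng(0)

def m_out_of_n_bootstrap(x, stat, m, B=2000):
    """Draw B resamples of size m (with replacement) from x and
    return the bootstrap replicates of the statistic."""
    n = len(x)
    idx = rng.integers(0, n, size=(B, m))
    return np.array([stat(x[row]) for row in idx])

# Example: the sample maximum for Uniform(0, theta) data, where the
# n-out-of-n bootstrap is known to be inconsistent.
n, theta = 200, 1.0
x = rng.uniform(0, theta, size=n)

naive = m_out_of_n_bootstrap(x, np.max, m=n)               # m = n: naive bootstrap
smaller = m_out_of_n_bootstrap(x, np.max, m=int(n**0.5))   # m = sqrt(n): placeholder choice

print("naive bootstrap 95% quantile of the max:", np.quantile(naive, 0.95))
print("m-out-of-n      95% quantile of the max:", np.quantile(smaller, 0.95))
```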

2.
Strong orthogonal arrays (SOAs) were recently introduced and studied as a class of space-filling designs for computer experiments. An important problem that has not been addressed in the literature is that of design selection for such arrays. In this article, we conduct a systematic investigation into this problem, focusing on the most useful SOA(n, m, 4, 2+)s and SOA(n, m, 4, 2)s. We first address design selection for SOAs of strength 2+ by examining their three-dimensional projections, and present both theoretical and computational results. We then formulate a general framework for the selection of SOAs of strength 2 based on their two-dimensional projections. The approach is fruitful: it is applicable when SOAs of strength 2+ do not exist, and it recovers the strength-2+ designs when they do. The Canadian Journal of Statistics 47: 302–314; 2019 © 2019 Statistical Society of Canada

3.
This article studies the non-null distribution of the two-sample t-statistic, or Welch statistic, under non-normality. The asymptotic expansion of the non-null distribution is derived up to order n⁻¹, where n is the pooled sample size, under general conditions. It is used to compare the power with that obtained by the normal-theory method. A simple technique for achieving more power in practice, via a monotone transformation, is recommended.
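
For orientation, the Welch statistic itself is the standard studentized mean difference; a small sketch computing it on skewed data and after a monotone (log) transformation, in the spirit of the abstract's recommendation (the specific transformation here is illustrative, not the article's prescription):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.exponential(1.0, size=30)   # skewed, non-normal samples
y = rng.exponential(1.5, size=40)

# Welch statistic: (xbar - ybar) / sqrt(s1^2/n1 + s2^2/n2)
t = (x.mean() - y.mean()) / np.sqrt(x.var(ddof=1)/len(x) + y.var(ddof=1)/len(y))
print("Welch t:", t)
print(stats.ttest_ind(x, y, equal_var=False))   # same statistic, Welch-Satterthwaite df

# A monotone transformation (log, since the data are positive) reduces
# skewness and can improve power in practice.
print(stats.ttest_ind(np.log(x), np.log(y), equal_var=False))
```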

4.
The generalized skew-normal distribution introduced by Balakrishnan (2002) is used to obtain new generalizations of the univariate Cauchy distribution with two parameters, denoted by GC_{m,n}(a, b), where m and n are non-negative integers and a, b ∈ ℝ. For the cases (m, n) = (1, 2), (2, 1), (0, 3) and (3, 0), explicit forms of the density functions are derived and compared with previous generalizations of the Cauchy and skew-Cauchy distributions.

5.
Traditional resampling methods for estimating sampling distributions sometimes fail, and alternative approaches are then needed. For example, if the classical central limit theorem does not hold and the naïve bootstrap fails, the m/n bootstrap, based on smaller-sized resamples, may be used as an alternative. The sufficient bootstrap, which uses only the distinct observations in a bootstrap sample, is another recently proposed alternative to the naïve bootstrap, suggested as a way to reduce the computational burden of bootstrapping. It works as long as the naïve bootstrap does; however, if the naïve bootstrap fails, so does the sufficient bootstrap. In this paper, we propose combining the sufficient bootstrap with the m/n bootstrap in order both to regain consistent estimation of sampling distributions and to reduce the computational burden of the bootstrap. We obtain necessary and sufficient conditions for asymptotic normality of the proposed method, and propose new values for the resample size m. We compare the proposed method with the naïve bootstrap, the sufficient bootstrap, and the m/n bootstrap by simulation.
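
A minimal sketch of the combined idea, assuming m = n^(2/3) purely for illustration (the paper proposes its own values of m): resample m indices with replacement, then evaluate the statistic on the distinct observations only, as the sufficient bootstrap does:

```python
import numpy as np

rng = np.random.default_rng(2)

def sufficient_mn_bootstrap(x, stat, m, B=2000):
    """m/n bootstrap combined with the sufficient-bootstrap idea:
    draw m indices with replacement, then evaluate the statistic
    on the distinct observations of the resample only."""
    out = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, len(x), size=m)
        out[b] = stat(x[np.unique(idx)])   # distinct observations only
    return out

x = rng.normal(size=100)
m = int(len(x) ** (2/3))                   # illustrative resample size, not the paper's rule
reps = sufficient_mn_bootstrap(x, np.mean, m)
print("bootstrap SE of the mean:", reps.std(ddof=1))
```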

6.
This paper provides tables for the construction and selection of a tightened-normal-tightened variables sampling scheme of type TNTVSS(n1, n2; k). The method of designing the scheme indexed by (AQL, α) and (LQL, β) is indicated. The TNTVSS(nT, nN; k) is compared with conventional single sampling plans for variables and with the TNT(n1, n2; c) scheme for attributes, and it is shown that the TNTVSS is more efficient.

7.
Suppose X_1, X_2, …, X_m is a random sample of size m from a population with probability density function f(x), x > 0, and let X_{1,m} < ⋯ < X_{m,m} be the corresponding order statistics. We assume m is an integer-valued random variable with P(m = k) = p(1 − p)^{k−1}, k = 1, 2, …, and 0 < p < 1. Kakosyan, Klebanov and Melamed conjectured that the identical distribution of mX_{1,m} and nX_{1,n} for fixed n characterizes the exponential distribution. In this paper we prove that, under the assumption of a monotone hazard rate, the identical distribution of mX_{1,m} and (n − r + 1)(X_{r,n} − X_{r−1,n}) for some fixed r and n with 1 ≤ r ≤ n, n ≥ 2, X_{0,n} = 0, characterizes the exponential distribution. Under the assumption of a monotone hazard rate, the conjecture of Kakosyan, Klebanov and Melamed then follows from the above result with r = 1.
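
Both distributional identities underlying this characterization are easy to check by simulation in the exponential case; a quick illustrative sketch (no part of the proof):

```python
import numpy as np

rng = np.random.default_rng(3)
lam, n, r, p, N = 1.0, 10, 4, 0.3, 50_000

# (n - r + 1)(X_{r,n} - X_{r-1,n}): normalized spacing of exponential
# order statistics, distributed as X_1 when the data are exponential.
xs = np.sort(rng.exponential(1/lam, size=(N, n)), axis=1)
spacing = (n - r + 1) * (xs[:, r-1] - xs[:, r-2])

# m * X_{1,m} with geometric sample size m: also distributed as X_1.
m = rng.geometric(p, size=N)
scaled_min = m * np.array([rng.exponential(1/lam, size=k).min() for k in m])

for q in (0.25, 0.5, 0.9):
    print(q, np.quantile(spacing, q), np.quantile(scaled_min, q),
          -np.log(1 - q) / lam)   # exact exponential quantile for comparison
```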

8.
9.
We consider the situation where there is a known regression model that can be used to predict an outcome, Y, from a set of predictor variables X. A new variable B is expected to enhance the prediction of Y. A dataset of size n containing Y, X and B is available, and the challenge is to build an improved model for Y|X, B that uses both the available individual-level data and some summary information obtained from the known model for Y|X. We propose a synthetic data approach, which consists of creating m additional synthetic data observations and then analyzing the combined dataset of size n + m to estimate the parameters of the Y|X, B model. This combined dataset of size n + m has missing values of B for m of the observations, and is analyzed using methods that can handle missing data (e.g., multiple imputation). We present simulation studies and illustrate the method using data from the Prostate Cancer Prevention Trial. Though the synthetic data method is applicable in a general regression context, to provide some justification we show in two special cases that the asymptotic variances of the parameter estimates in the Y|X, B model are identical to those from an alternative constrained maximum likelihood estimation approach. This correspondence in special cases, together with the method's broad applicability, makes it appealing for use across diverse scenarios. The Canadian Journal of Statistics 47: 580–603; 2019 © 2019 Statistical Society of Canada
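
A rough sketch of the mechanics under strong simplifying assumptions: linear models throughout, a single stochastic imputation in place of full multiple imputation, and made-up coefficients standing in for the externally "known" Y|X model:

```python
import numpy as np

rng = np.random.default_rng(4)

# Observed data of size n: (Y, X, B).
n, m = 200, 200
x = rng.normal(size=n)
b = 0.5 * x + rng.normal(scale=0.8, size=n)
y = 1.0 + 2.0 * x + 1.5 * b + rng.normal(size=n)

# "Known" Y|X model, assumed available from an external source
# (illustrative values, not estimated here).
a0, a1, s = 1.0, 2.75, 1.6

# Create m synthetic rows: draw X, generate Y from the known Y|X model;
# B is missing for these rows.
xs = rng.choice(x, size=m)
ys = a0 + a1 * xs + rng.normal(scale=s, size=m)

# Impute the missing B from the B|X,Y relation fitted on the complete rows
# (a single stochastic imputation for brevity; the paper uses methods such
# as multiple imputation).
Z = np.column_stack([np.ones(n), x, y])
coef, *_ = np.linalg.lstsq(Z, b, rcond=None)
resid_sd = (b - Z @ coef).std(ddof=3)
bs = np.column_stack([np.ones(m), xs, ys]) @ coef + rng.normal(scale=resid_sd, size=m)

# Fit the Y|X,B model on the combined n + m rows.
Xall = np.column_stack([np.ones(n + m), np.r_[x, xs], np.r_[b, bs]])
beta, *_ = np.linalg.lstsq(Xall, np.r_[y, ys], rcond=None)
print("estimated (intercept, X, B) coefficients:", beta)
```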

10.
The principal components analysis (PCA) in the frequency domain of a stationary p-dimensional time series (X_n)_{n∈ℤ} leads to a summarizing time series written as a linear filter of the original series, ∑_m C_m X_{n−m}. We observe that, when the coefficients C_m, m ≠ 0, are close to 0, this PCA is close to the usual PCA, that is, the PCA in the temporal domain. When these coefficients tend to 0, the corresponding limit is said to satisfy a property denoted 𝒫, whose consequences we study. Finally, we examine, for any series, the proximity between the two PCAs.

11.
Importance sampling and control variates have been used as variance reduction techniques for estimating bootstrap tail quantiles and moments, respectively. We adapt each method to apply to both quantiles and moments, and combine the methods to obtain variance reductions by factors of 4 to 30 in simulation examples. We use two innovations in control variates: interpreting control variates as a re-weighting method, and implementing control variates using the saddlepoint. The combination requires only the linear saddlepoint but applies to general statistics, and produces estimates with accuracy of order n^{−1/2}B^{−1}, where n is the sample size and B is the bootstrap sample size. We also discuss two modifications to classical importance sampling: a weighted-average estimate and a mixture design distribution. These modifications make importance sampling robust and allow moments to be estimated from the same bootstrap simulation used to estimate quantiles.
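
As a hedged illustration of the control-variate ingredient in its most elementary form (not the article's saddlepoint implementation): when estimating a bootstrap moment, the resample mean is a natural control variate, because its bootstrap expectation is known exactly and equals the sample mean.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.exponential(size=50)
n, B = len(x), 2000

idx = rng.integers(0, n, size=(B, n))
samples = x[idx]
T = np.median(samples, axis=1)   # bootstrap replicates of the statistic of interest
C = samples.mean(axis=1)         # control variate: E*[C] = mean(x), known exactly

# Regression-adjusted (control-variate) estimate of the bootstrap moment E*[T].
beta = np.cov(T, C)[0, 1] / C.var(ddof=1)
print("plain bootstrap estimate of E*[median]:", T.mean())
print("control-variate estimate:              ", T.mean() - beta * (C.mean() - x.mean()))
```

The variance reduction achieved depends on how strongly T and C correlate; for roughly linear statistics the gain can be large, which is the intuition behind the linear-saddlepoint construction in the article.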

12.
This article presents non-parametric predictive inference for future order statistics. Given data consisting of n real-valued observations, m future observations are considered, and predictive probabilities are presented for the r-th ordered future observation. In addition, joint and conditional probabilities are presented for events involving multiple future order statistics. The article further presents the use of such predictive probabilities in statistical inference, in particular for pairwise and multiple comparisons based on two or more independent groups of data.
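
The predictive probabilities involved depend only on the exchangeability of the n past and m future observations, so any continuous distribution can be used to check them numerically; a minimal Monte Carlo sketch with illustrative values of n, m, r and j:

```python
import numpy as np

rng = np.random.default_rng(6)
n, m, r, j, N = 8, 5, 2, 3, 200_000

# Estimate P( r-th ordered future observation falls between the j-th and
# (j+1)-th order statistics of the n observed data points ).
hits = 0
for _ in range(N):
    data = np.sort(rng.uniform(size=n))      # uniforms suffice: the event is rank-based
    future = np.sort(rng.uniform(size=m))
    if data[j - 1] < future[r - 1] < data[j]:
        hits += 1
print("estimated predictive probability:", hits / N)
```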

13.
14.
In this paper we examine failure-censored sampling plans for the two-parameter exponential distribution based on m random samples, each of size n. The suggested procedure is based on exact results, and only the first failure time of each sample is needed. Values of the acceptability constant are also tabulated for selected values of p_{α1}, p_{β1}, α and β. Further, the proposed sampling plans are compared with ordinary sampling plans using a sample of size mn. Compared with ordinary sampling plans, the proposed plan has the advantage of a shorter test time and a saving of resources.

15.
Multivariate combination-based permutation tests have been widely used in many complex problems. In this paper we focus on the equipower property, derived directly from the finite-sample consistency property, and we analyze the impact of the dependency structure on the combined tests. First, we consider the finite-sample consistency property, which assumes that sample sizes are fixed (and possibly small) while a large number of informative variables is observed on each subject. Moreover, since permutation test statistics do not need to be standardized, we need not assume that data are homoscedastic under the alternative. The equipower property is then derived from these two notions. Consider the unconditional permutation power of a test statistic T for fixed sample sizes, with V ≥ 2 independent and identically distributed variables and fixed effect δ, calculated in two ways: (i) from two V-dimensional samples of sizes m1 and m2, respectively; (ii) from two unidimensional samples of sizes n1 = Vm1 and n2 = Vm2, respectively. Since the unconditional power essentially depends on the non-centrality induced by T, and the two ways have exactly the same likelihood and the same non-centrality, we show that they have the same power function, at least approximately. To investigate both the equipower property and the power behavior in the presence of correlation, we performed an extensive simulation study.
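
A rough simulation sketch of the two-way comparison described above, using the simplest combining function (the sum of component-wise mean differences) and illustrative values of V, m1, m2 and δ; it is a toy version of the comparison, not the authors' study design:

```python
import numpy as np

rng = np.random.default_rng(7)

def perm_pvalue(x, y, n_perm=200):
    """Two-sample permutation test; rows are subjects, and the combined
    statistic is the sum over variables of the differences in means."""
    obs = (x.mean(axis=0) - y.mean(axis=0)).sum()
    pooled = np.vstack([x, y])
    cnt = 1
    for _ in range(n_perm):
        perm = rng.permutation(len(pooled))
        xs, ys = pooled[perm[:len(x)]], pooled[perm[len(x):]]
        if abs((xs.mean(axis=0) - ys.mean(axis=0)).sum()) >= abs(obs):
            cnt += 1
    return cnt / (n_perm + 1)

def power(V, m1, m2, delta, reps=200, alpha=0.05):
    """Empirical power under a shift delta on every variable."""
    rej = sum(perm_pvalue(rng.normal(delta, 1, size=(m1, V)),
                          rng.normal(0, 1, size=(m2, V))) <= alpha
              for _ in range(reps))
    return rej / reps

# (i) two V-dimensional samples of sizes m1 and m2
print("V-dimensional: ", power(V=5, m1=6, m2=6, delta=0.5))
# (ii) two unidimensional samples of sizes V*m1 and V*m2
print("unidimensional:", power(V=1, m1=30, m2=30, delta=0.5))
```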

16.
17.
Asymptotic inferences about a linear combination of K independent binomial proportions are very frequent in applied research. Nevertheless, until quite recently research had focused almost exclusively on cases with K ≤ 2 (particularly on one proportion and the difference of two proportions). This article focuses on cases with K > 2, which have recently begun to receive more attention due to their great practical interest. Several procedures for making this inference have not yet been compared: on the one hand, the score method (S0) and the adjusted-Wald method of Martín Andrés et al. (W3), which generalizes the method of Price and Bonett; on the other, the method of Zou et al. (N0) based on the Wilson confidence interval, which generalizes the Newcombe method. The article describes a new procedure (P0) based on the classic Peskun method, modifies the previous methods by adding a continuity correction (methods S0c, W3c, N0c and P0c, respectively) and, finally, reports a simulation comparing the eight aforementioned procedures (selected from a total of 32 possible methods). The conclusion reached is that the S0c method is the best, although for very small samples (n_i ≤ 10 for all i) the W3 method is better. The P0 method would be optimal if one needs a method that is almost never too liberal, but this entails using a method that is too conservative and provides excessively wide CIs. The W3 and P0 methods have the additional advantage of being very easy to apply. A free program that implements the S0 and S0c methods (the most complex ones) can be obtained at http://www.ugr.es/local/bioest/Z_LINEAR_K.EXE.
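
For orientation, the unadjusted Wald interval for L = Σ_i β_i p_i is the baseline that the S0, W3, N0 and P0 procedures refine; a minimal sketch (the function name is mine, and the adjusted methods, which add pseudo-observations or substitute Wilson/score limits, are not reproduced here):

```python
import numpy as np
from scipy import stats

def wald_ci_linear_comb(x, n, beta, level=0.95):
    """Unadjusted Wald CI for sum_i beta_i * p_i based on K independent
    binomials: x_i successes out of n_i trials."""
    x, n, beta = map(np.asarray, (x, n, beta))
    p = x / n
    L = np.sum(beta * p)
    se = np.sqrt(np.sum(beta**2 * p * (1 - p) / n))
    z = stats.norm.ppf(0.5 + level / 2)
    return L - z * se, L + z * se

# Example with K = 3 proportions and weights (1, -1, 1):
print(wald_ci_linear_comb(x=[12, 20, 7], n=[40, 50, 30], beta=[1, -1, 1]))
```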

18.
Consider the randomly weighted sums S_m(θ) = ∑_{i=1}^{m} θ_i X_i, 1 ≤ m ≤ n, and their maxima M_n(θ) = max_{1≤m≤n} S_m(θ), where the X_i, 1 ≤ i ≤ n, are real-valued and dependent according to a wide class of dependence structures, and the weights θ_i, 1 ≤ i ≤ n, are non-negative and arbitrarily dependent, but independent of the X_i. Under some mild conditions on the right tails of the weights θ_i, we establish asymptotic equivalence formulas for the tail probabilities of S_n(θ) and M_n(θ) in the cases where the X_i have dominatedly varying, long-tailed and subexponential distributions, respectively.

19.
Classes of higher-order kernels for estimation of a probability density are constructed by iterating the twicing procedure. Given a kernel K of order l, we build a family of kernels K_m of orders l(m + 1) with the attractive property that their Fourier transforms are simply 1 − {1 − φ(·)}^{m+1}, where φ is the Fourier transform of K. These families of higher-order kernels are well suited when the fast Fourier transform is used to speed up the calculation of the kernel estimate or the least-squares cross-validation procedure for selection of the window width. We also compare the theoretical performance of the optimal polynomial-based kernels with that of the iterative twicing kernels constructed from some popular second-order kernels.
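
A numerical sketch of the construction for the Gaussian kernel, whose Fourier transform φ(t) = exp(−t²/2) is available in closed form; one twicing step (m = 1) turns this second-order kernel into a fourth-order one, and in that case K_1 = 2K − K∗K can be checked directly:

```python
import numpy as np

# Gaussian kernel K: second-order (l = 2), Fourier transform phi(t) = exp(-t^2/2).
t = np.linspace(-20, 20, 4001)
dt = t[1] - t[0]
phi = np.exp(-t**2 / 2)

m = 1                               # one twicing step -> order l(m + 1) = 4
phi_m = 1 - (1 - phi) ** (m + 1)    # Fourier transform of the twicing kernel K_m

# Numerical inverse Fourier transform to evaluate K_m on a grid
# (phi_m is real and symmetric, so the cosine transform suffices).
x = np.linspace(-5, 5, 501)
Km = (phi_m[None, :] * np.cos(np.outer(x, t))).sum(axis=1) * dt / (2 * np.pi)

# Sanity checks: K_m integrates to 1, and for m = 1 it matches 2K - K*K,
# i.e. 2*N(0,1) - N(0,2) densities in the Gaussian case.
dx = x[1] - x[0]
direct = 2 * np.exp(-x**2 / 2) / np.sqrt(2 * np.pi) - np.exp(-x**2 / 4) / np.sqrt(4 * np.pi)
print("integral:", Km.sum() * dx)
print("max abs difference from 2K - K*K:", np.abs(Km - direct).max())
```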

20.
This paper derives first-order sampling moments of individual Mahalanobis distances (MDs) in cases where the dimension p of the variable is proportional to the sample size n. Asymptotic expected values as n, p → ∞ are derived under the assumption p/n → c, 0 ≤ c < 1. It is shown that some types of standard estimators remain unbiased in this case, while others are asymptotically biased, a property that appears to have gone unnoticed in the literature. Second-order moments are also supplied to give additional insight into the matter.
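
A quick simulation sketch of the setting (illustrative only; the paper's estimators and moment expansions are not reproduced here): with the mean and covariance estimated from the same data, the average in-sample squared MD equals (n − 1)p/n by construction, whereas leave-one-out distances inflate noticeably when p/n is not small:

```python
import numpy as np

rng = np.random.default_rng(8)
n, p, N = 100, 50, 200   # p/n = c = 0.5

def sq_mds(X):
    """Squared Mahalanobis distances using the sample mean and covariance."""
    Z = X - X.mean(axis=0)
    S = np.cov(X, rowvar=False)             # denominator n - 1
    return np.einsum('ij,jk,ik->i', Z, np.linalg.inv(S), Z)

in_sample, loo = [], []
for _ in range(N):
    X = rng.normal(size=(n, p))             # true squared distance has mean p
    in_sample.append(sq_mds(X).mean())
    # leave-one-out distance of the first observation
    mu = X[1:].mean(axis=0)
    S = np.cov(X[1:], rowvar=False)
    z = X[0] - mu
    loo.append(z @ np.linalg.inv(S) @ z)

print("population value:             ", p)
print("mean in-sample squared MD:    ", np.mean(in_sample))   # (n-1)p/n
print("mean leave-one-out squared MD:", np.mean(loo))         # inflated for large p/n
```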
