Similar Documents
20 similar documents found (search time: 62 ms)
1.
Weighted methods are an important feature of multiplicity control procedures. The weights must usually be chosen a priori, on the basis of experimental hypotheses. Under some conditions, however, they can be chosen using information from the data (hence a posteriori) while still maintaining multiplicity control. In this paper we provide: (1) a review of weighted methods for familywise type I error rate (FWE) control, both parametric and nonparametric, and for false discovery rate (FDR) control; (2) a review of data-driven weighted methods for FWE control; (3) a new proposal for weighted FDR control with data-driven weights under independence among variables; (4) an extension of this proposal to any type of dependence; (5) a simulation study assessing the performance of the procedure of point (4) under various conditions.
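For orientation, here is a minimal sketch of the classical a priori weighted Benjamini-Hochberg step-up procedure, the usual starting point for weighted FDR control. The weights are fixed in advance (not the paper's data-driven proposal), and the normalization and example values are illustrative assumptions.

```python
import numpy as np

def weighted_bh(pvals, weights, alpha=0.05):
    """Weighted Benjamini-Hochberg step-up: reject the hypotheses whose
    weighted p-values p_i / w_i fall under the BH line. Weights are
    rescaled to average 1, so larger w_i makes H_i easier to reject."""
    p = np.asarray(pvals, dtype=float)
    w = np.asarray(weights, dtype=float)
    m = len(p)
    w = w * m / w.sum()                      # normalize: sum of weights = m
    q = p / w                                # weighted p-values
    order = np.argsort(q)
    below = q[order] <= alpha * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True                 # reject the k smallest q_i
    return reject

# Hypothetical example: 5 true signals with doubled weights, 95 nulls.
rng = np.random.default_rng(0)
p = np.concatenate([rng.uniform(0, 0.001, 5), rng.uniform(0, 1, 95)])
w = np.concatenate([np.full(5, 2.0), np.ones(95)])
print(weighted_bh(p, w).sum(), "rejections")
```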

2.
"The [U.S.] Current Population Survey (CPS) reinterview sample consists of two subsamples: (a) a sample of CPS households is reinterviewed and the discrepancies between the reinterview responses and the original interview responses are reconciled for the purpose of obtaining more accurate responses..., and (b) a sample of CPS households, nonoverlapping with sample (a), is reinterviewed 'independently' of the original interview for the purpose of estimating simple response variance (SRV). In this article a model and estimation procedure are proposed for obtaining estimates of SRV from subsample (a) as well as the customary estimates of SRV from subsample (b).... Data from the CPS reinterview program for both subsamples (a) and (b) are analyzed both (1) to illustrate the methodology and (2) to check the validity of the CPS reinterview data. Our results indicate that data from subsample (a) are not consistent with the data from subsample (b) and provide convincing evidence that errors in subsample (a) are the source of the inconsistency."  相似文献   

3.
The cumulative non-central chi-square distribution is tabulated for all combinations of values of λ = 0 (0.1) 1.0 (0.2) 3.0 (0.5) 5.0 (1.0) 34.0, ν = 1 (1) 30 (2) 50 (5) 100, and y = 0.01 (0.01) 0.1 (0.1) 1.0 (0.2) 3.0 (0.5) 10.0 (1.0) 30.0 (2.0) 50.0 (5.0) 165.0. The computations have been correctly rounded to five decimal places. Also, there is a discussion about the error involved in the computations. Furthermore, there is a discussion about possible interpolation in the table using Lagrange's method.
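Tables like this one predate cheap computation; today the same values can be reproduced directly, for example with SciPy's non-central chi-square distribution. The parameter values below are arbitrary illustrations within the tabulated ranges.

```python
from scipy.stats import ncx2

# F(y; nu, lambda): cumulative non-central chi-square with nu degrees
# of freedom and non-centrality lambda, rounded to five decimals as in
# the tables.
lam, nu, y = 1.0, 10, 3.0
print(round(ncx2.cdf(y, df=nu, nc=lam), 5))
```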

4.
This paper provides the theoretical explanation and Monte Carlo experiments of using a modified version of the Durbin-Watson (DW) statistic to test an I(1) process against I(d) alternatives, that is, integrated processes of order d, where d is a fractional number. We provide the exact order of magnitude of the modified DW test when the data generating process is an I(d) process with d ∈ (0, 1.5). Moreover, the consistency of the modified DW statistic as a unit root test against I(d) alternatives with d ∈ (0, 1) ∪ (1, 1.5) is proved in this paper. In addition to the theoretical analysis, Monte Carlo experiments show that the modified DW statistic can indeed be used as a unit root test against I(d) alternatives.
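For intuition, the sketch below computes the classical (unmodified) Durbin-Watson statistic on a series and shows why it separates I(1) from I(0) behavior; the paper's modification of the statistic is not reproduced here, and all simulation settings are illustrative.

```python
import numpy as np

def durbin_watson(e):
    """Classical Durbin-Watson statistic: sum of squared first
    differences divided by the sum of squares of the series."""
    e = np.asarray(e, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

# A random walk (I(1)) drives the statistic toward 0, while white
# noise (I(0)) gives values near 2.
rng = np.random.default_rng(1)
walk = np.cumsum(rng.standard_normal(500))
noise = rng.standard_normal(500)
print(durbin_watson(walk), durbin_watson(noise))
```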

5.
The cumulative non-central chi-square distribution is tabulated for all combinations of values of λ = 0 (0.1) 1.0 (0.2) 3.0 (0.5) 5.0 (1.0) 34.0, ν = 1 (1) 30 (2) 50 (5) 100, and y = 0.01 (0.01) 0.1 (0.1) 1.0 (0.2) 3.0 (0.5) 10.0 (1.0) 30.0 (2.0) 50.0 (5.0) 165.0. The computations have been correctly rounded to five decimal places. Also, there is a discussion about the error involved in the computations. Furthermore, there is a discussion about possible interpolation in the table using Lagrange's method.

6.
Robust test procedures are developed for testing the intercept of a simple regression model when the slope is (i) completely unspecified, (ii) specified to a fixed value, or (iii) suspected to be a fixed value. Defining (i) unrestricted (UT), (ii) restricted (RT), and (iii) pre-test test (PTT) functions for the intercept parameter under the three choices of the slope, tests are formulated using the M-estimation methodology. The asymptotic distributions of the test statistics and their asymptotic power functions are derived. The analytical and graphical comparisons of the tests reveal that the PTT achieves a reasonable dominance over the other tests.
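A schematic of the pre-test flow may help: first pre-test the slope at its suspected value, then test the intercept in either the restricted or the unrestricted model depending on the outcome. Ordinary least-squares t-tests stand in for the paper's M-estimation versions here, so every detail below is a simplifying assumption.

```python
import numpy as np
from scipy import stats

def pretest_intercept_test(x, y, b0=0.0, alpha_pre=0.05):
    """Pre-test flow: test H0: slope = b0 first; if it survives, test the
    intercept with the slope restricted to b0, else in the full model."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(y)
    X = np.column_stack([np.ones(n), x])
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)
    se = np.sqrt(np.diag(s2 * XtX_inv))
    t_slope = (beta[1] - b0) / se[1]                 # pre-test statistic
    if abs(t_slope) < stats.t.ppf(1 - alpha_pre / 2, n - 2):
        z = y - b0 * x                               # restricted: slope = b0
        t_int = z.mean() / (z.std(ddof=1) / np.sqrt(n))
        return t_int, n - 1                          # one-sample t on z
    return beta[0] / se[0], n - 2                    # unrestricted t

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 50)
y = 0.5 + 0.0 * x + rng.standard_normal(50)          # true slope is 0
print(pretest_intercept_test(x, y, b0=0.0))
```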

7.
This paper proposes a framework for the development of instruments to measure content learning and problem-solving skills for the introductory statistics course. This framework is based upon a model of the problem-solving process central to statistical reasoning. The framework defines and interrelates six measurement tasks: (1) subjective reports; (2) reports concerning truth, falsity, or equivalence; (3) supply the appropriate missing information in a message; (4) answer a question based upon a specific message; (5) reproduce a message; and (6) carry out a procedure.

8.
Particle filters (PF) and auxiliary particle filters (APF) are widely used sequential Monte Carlo (SMC) techniques. In this paper we comparatively analyse, from a non-asymptotic point of view, the Sampling Importance Resampling (SIR) PF with optimal conditional importance distribution (CID) and the fully adapted APF (FA). We compute the finite-sample conditional second-order moments of Monte Carlo (MC) estimators of a moment of interest of the filtering pdf, and analyse under which circumstances the FA-based estimator outperforms (or not) the optimal Sequential Importance Sampling (SIS)-based one. Our analysis is local, in the sense that we compare the estimators produced by one time step of the different SMC algorithms, starting from a common set of weighted points. This analysis enables us to propose a hybrid SIS/FA algorithm which automatically switches at each time step from one algorithm to the other. We finally validate our results via computer simulations.
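To fix ideas, here is a minimal bootstrap SIR particle filter for a toy linear-Gaussian model. The bootstrap proposal is the prior transition, i.e. neither the optimal CID nor the fully adapted APF analyzed in the paper, and the model and its parameters are illustrative assumptions.

```python
import numpy as np

def sir_particle_filter(y, n_part=1000, phi=0.9, q=1.0, r=1.0, seed=0):
    """Bootstrap SIR filter for x_t = phi*x_{t-1} + N(0, q),
    y_t = x_t + N(0, r). Returns the filtered-mean estimates."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_part)                       # initial particles
    means = []
    for yt in y:
        x = phi * x + rng.normal(0.0, np.sqrt(q), n_part)  # propagate
        logw = -0.5 * (yt - x) ** 2 / r                    # Gaussian log-likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * x))                        # weighted estimate
        x = x[rng.choice(n_part, n_part, p=w)]             # multinomial resampling
    return np.array(means)

# Simulate the toy model and run one filter pass.
rng = np.random.default_rng(1)
xt, ys = 0.0, []
for _ in range(100):
    xt = 0.9 * xt + rng.standard_normal()
    ys.append(xt + rng.standard_normal())
print(sir_particle_filter(ys)[-5:])
```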

9.
The cumulative non-central chi-square distribution is tabulated for all combinations of values of λ = 0 (0.1) 1.0 (0.2) 3.0 (0.5) 5.0 (1.0) 34.0, ν = 1 (1) 30 (2) 50 (5) 100, and y = 0.01 (0.01) 0.1 (0.1) 1.0 (0.2) 3.0 (0.5) 10.0 (1.0) 30.0 (2.0) 50.0 (5.0) 165.0. The computations have been correctly rounded to five decimal places. Also, there is a discussion about the error involved in the computations. Furthermore, there is a discussion about possible interpolation in the table using Lagrange's method.

10.
In the literature a systematic method of obtaining a group testing design is not available at present. Weideman and Raghavarao (1987a, b) gave methods for the construction of non-adaptive hypergeometric group testing designs for identifying at most two defectives by using a dual method. In the present investigation we have developed a method of constructing group testing designs from (i) hypercubic designs for t ≡ 3 (mod 6) and (ii) balanced incomplete block designs for t ≡ 1 (mod 6) and t ≡ 3 (mod 6). These constructions are accomplished by the use of dual designs. The designs so constructed satisfy specified properties and attain an optimal bound as discussed by Weideman and Raghavarao (1987a, b). Here it is also shown that the condition for pairwise disjoint sets of a BIBD for t ≡ 1 (mod 6) given by Weideman and Raghavarao (1987b) is not true for all such designs.
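For intuition about how non-adaptive group testing works, the naive "clear every item in a negative pool" decoder (often called COMP) identifies the defectives for any design that guarantees identification of at most two of them. The toy design below is an illustrative assumption, not one of the paper's constructions.

```python
import numpy as np

def comp_decode(design, outcomes):
    """Naive decoder for non-adaptive group testing: any item appearing
    in a pool that tested negative is cleared; for a design guaranteeing
    identification of <= 2 defectives, the rest are the defectives.
    design: (tests x items) 0/1 matrix; outcomes: 0/1 result per test."""
    design = np.asarray(design, dtype=bool)
    outcomes = np.asarray(outcomes, dtype=bool)
    cleared = design[~outcomes].any(axis=0)   # appears in a negative pool
    return np.flatnonzero(~cleared)

# Toy design: 4 pools over 6 items, true defectives {1, 4}.
D = np.array([[1, 1, 0, 0, 0, 0],
              [0, 0, 1, 1, 0, 0],
              [0, 1, 0, 0, 1, 0],
              [1, 0, 0, 1, 0, 1]])
defective = np.zeros(6, dtype=bool); defective[[1, 4]] = True
out = (D @ defective) > 0                     # a pool is positive iff it hits a defective
print(comp_decode(D, out))                    # -> [1 4]
```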

11.
In this paper, we discuss the problem of estimating the reliability (R) of a component based on maximum likelihood estimators (MLEs). The reliability of a component is given by R = P[Y &lt; X]. Here X is the random strength of a component subjected to a random stress (Y), and (X, Y) follow a bivariate Pareto (BVP) distribution. We obtain the asymptotic normal (AN) distribution of the MLE of the reliability (R).
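A Monte Carlo check of the stress-strength quantity R = P[Y &lt; X] is straightforward. The sketch below uses independent Pareto marginals purely for simplicity, whereas the paper works with a dependent bivariate Pareto model and an MLE-based estimator; the shape parameters are hypothetical.

```python
import numpy as np

def stress_strength_reliability(x_samples, y_samples):
    """Monte Carlo estimate of R = P[Y < X] from paired draws of
    strength X and stress Y."""
    return np.mean(np.asarray(y_samples) < np.asarray(x_samples))

rng = np.random.default_rng(0)
n = 100_000
a_x, a_y = 2.0, 3.0                       # hypothetical shape parameters
x = rng.pareto(a_x, n) + 1.0              # strength, classical Pareto on [1, inf)
y = rng.pareto(a_y, n) + 1.0              # stress
print(stress_strength_reliability(x, y))  # close to a_y/(a_x + a_y) = 0.6 here
```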

12.
Econometric Reviews (2007), 26(2), 439-468
This paper generalizes the cointegrating model of Phillips (1991) to allow for I(0), I(1), and I(2) processes. The model has a simple form that permits a wider range of I(2) processes than are usually considered, including a more flexible form of polynomial cointegration. Further, the specification relaxes restrictions identified by Phillips (1991) on the I(1) and I(2) cointegrating vectors and restrictions on how the stochastic trends enter the system. To date there has been little work on Bayesian I(2) analysis, and so this paper attempts to address this gap in the literature. A method of Bayesian inference in potentially I(2) processes is presented with application to Australian money demand using a Jeffreys prior and a shrinkage prior.

13.
This note presents an extension of the Q-method of analysis for binary designs given by Rao (1956) to n-ary balanced and partially balanced block designs. Here a linked n-ary block (LNB) design is defined as the dual of a balanced n-ary (BN) design. After a note on Yates' (1939, 1940) method of P-analysis, we further extend the expressions for binary linked block (LB) designs given by Rao (1956) to linked n-ary block (LNB) designs, which admit easy estimation of parameters for all n-ary designs of this type.

14.
This paper presents a detailed comparative study of six major, leading methods for reasoning based on imperfect knowledge: (1) Bayes' rule, (2) Dempster-Shafer theory, (3) fuzzy set theory, (4) Model, (5) Cohen's system of inductive probabilities, and (6) a class of non-monotonic reasoning methods. Each method is presented and discussed in terms of theoretical content, a detailed numerical example, and a list of strengths and limitations. Purposely, the same numerical example is addressed by each method so that we are able to highlight the assumptions and computational requirements specific to each method in a consistent manner.

15.
The Energy Information Administration, which is the statistical arm of the Department of Energy, inherited many data-collection forms, files, and publications from predecessor agencies. There is a legislative mandate to establish a National Energy Information System (NEIS). This obligation demands attention to certain issues: (a) scope of NEIS, (b) methods of collection, (c) methods of storing information, (d) classification and indexing, (e) methods of access, and (f) reporting and publication. Early stages of dealing with most of these matters are underway.

16.
This paper surveys the different uses of Kalman filtering in the estimation of statistical (econometric) models. The Kalman filter will be portrayed as (i) a natural generalization of exponential smoothing with a time-dependent smoothing factor; (ii) a recursive estimation technique for a variety of econometric models amenable to a state-space formulation, in particular econometric models with time-varying coefficients; (iii) an instrument for the recursive calculation of the likelihood of the (constant) state-space coefficients; (iv) a means of helping to implement the scoring and EM methods for iteratively maximizing this likelihood; and (v) an analytical tool in asymptotic estimation theory. The concluding section points to the importance of Kalman filtering for alternatives to maximum likelihood estimation of state space parameters.
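Point (i) can be made concrete with the local-level model, where the Kalman update is exponential smoothing with a time-dependent factor (the gain) that settles to a constant. All parameter values below are illustrative.

```python
import numpy as np

def local_level_kalman(y, q=0.1, r=1.0, m0=0.0, p0=10.0):
    """Kalman filter for the local-level model
        x_t = x_{t-1} + w_t (var q),   y_t = x_t + v_t (var r).
    The update m_t = m_{t-1} + k_t (y_t - m_{t-1}) is exponential
    smoothing whose factor k_t converges to a constant."""
    m, p = m0, p0
    levels, gains = [], []
    for yt in y:
        p = p + q                    # predict: variance inflates by q
        k = p / (p + r)              # Kalman gain = smoothing factor
        m = m + k * (yt - m)         # update the level estimate
        p = (1 - k) * p              # posterior variance
        levels.append(m); gains.append(k)
    return np.array(levels), np.array(gains)

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(0, 0.3, 200)) + rng.normal(0, 1.0, 200)
levels, gains = local_level_kalman(y, q=0.09, r=1.0)
print(gains[:3], gains[-1])          # the gain settles to a steady-state value
```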

17.
Harter (1979) summarized applications of order statistics to multivariate analysis up through 1949. The present paper covers the period 1950–1959. References in the two papers were selected from the first and second volumes, respectively, of the author's chronological annotated bibliography on order statistics [Harter (1978, 1983)]. Tintner (1950a) established formal relations between four special types of multivariate analysis: (1) canonical correlation, (2) principal components, (3) weighted regression, and (4) discriminant analysis, all of which depend on ordered roots of determinantal equations. During the decade 1950–1959, numerous authors contributed to distribution theory and/or computational methods for ordered roots and their applications to multivariate analysis. Test criteria for (i) multivariate analysis of variance, (ii) comparison of variance–covariance matrices, and (iii) multiple independence of groups of variates when the parent population is multivariate normal were usually derived from the likelihood ratio principle until S. N. Roy (1953) formulated the union–intersection principle on which Roy &amp; Bose (1953) based their simultaneous test and confidence procedure. Roy &amp; Bargmann (1958) used an alternative procedure, called the step-down procedure, in deriving a test for problem (iii), and J. Roy (1958) applied the step-down procedure to problems (i) and (ii). Various authors developed and applied distribution theory for several multivariate distributions. Advances were also made on multivariate tolerance regions [Fraser &amp; Wormleighton (1951), Fraser (1951, 1953), Fraser &amp; Guttman (1956), Kemperman (1956), and Somerville (1958)], a criterion for rejection of multivariate outliers [Kudô (1957)], and linear estimators, from censored samples, of parameters of multivariate normal populations [Watterson (1958, 1959)]. Textbooks on multivariate analysis were published by Kendall (1957) and Anderson (1958), as well as a monograph by Roy (1957) and a book of tables by Pillai (1957).

18.
We prove, via the Borel-Cantelli lemma, that for every sequence of Gaussian random variables the combination of convergence in expectation and decreasing variances at fractional-polynomial rate implies strong convergence. This result has an important consequence for macroeconomic stochastic infinite-horizon models: The almost sure transversality condition (i.e., fiscal sustainability with probability one) is satisfied if (a) the discounted levels of net liabilities are Gaussian-distributed with fractional-polynomially decaying variances and (b) their means converge to zero. If (a) holds but (b) fails, the transversality condition will be almost surely violated. Hence, (a) and (b) constitute a test for almost sure fiscal sustainability.
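The key step can be sketched in a couple of lines (our reconstruction under the stated assumptions, not the paper's exact argument): a Gaussian tail bound turns the fractional-polynomial variance decay into a summable sequence, and Borel-Cantelli does the rest.

```latex
% Assume X_n Gaussian with E[X_n] = \mu_n \to 0 and
% Var(X_n) = \sigma_n^2 \le C n^{-a} for some a > 0
% (fractional-polynomial decay). For any \varepsilon > 0,
\[
  \Pr\bigl(|X_n - \mu_n| > \varepsilon\bigr)
  \;\le\; 2\exp\!\Bigl(-\tfrac{\varepsilon^2}{2\sigma_n^2}\Bigr)
  \;\le\; 2\exp\!\Bigl(-\tfrac{\varepsilon^2 n^{a}}{2C}\Bigr),
  \qquad
  \sum_{n \ge 1} \exp\!\Bigl(-\tfrac{\varepsilon^2 n^{a}}{2C}\Bigr) < \infty .
\]
% By Borel-Cantelli, |X_n - \mu_n| > \varepsilon occurs only finitely
% often almost surely; since \mu_n \to 0, this yields X_n \to 0 a.s.
```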

19.
Rényi (Bull. Amer. Math. Soc. 71 (6) (1965) 809) suggested a combinatorial group testing model in which the size of a testing group was restricted. In this model, Rényi considered the search for one defective element (significant factor) from a finite set of elements (factors). The corresponding optimal search designs were obtained by Katona (J. Combin. Theory 1 (2) (1966) 174). In the present work, we study Rényi's search model for several significant factors. This problem is closely related to the concept of binary superimposed codes, which were introduced by Kautz and Singleton (IEEE Trans. Inform. Theory 10 (4) (1964) 363) and were investigated by D'yachkov and Rykov (Problems Control Inform. Theory 12 (4) (1983) 229), Erdős et al. (Israel J. Math. 51 (1–2) (1985) 75), Ruszinkó (J. Combin. Theory Ser. A 66 (1994) 302), and Füredi (J. Combin. Theory Ser. A 73 (1996) 172). Our goal is to prove a lower bound on the search length and to construct optimal superimposed codes and search designs. Preliminary results were published by D'yachkov and Rykov (Conference on Computer Science &amp; Engineering Technology, Yerevan, Armenia, September 1997, p. 242).

20.
Asymptotic linearity plays a key role in estimation and testing in the presence of nuisance parameters. This property is established, in the very general context of a multivariate general linear model with elliptical VARMA errors, for the serial and nonserial multivariate rank statistics considered in Hallin and Paindaveine (Ann. Statist. 30 (2002a) 1103; Bernoulli 8 (2002b) 787; Ann. Statist. 32 (2004), to appear) and Oja and Paindaveine (J. Statist. Plann. Inference (2004), to appear). These statistics, which are multivariate versions of classical signed rank statistics, involve (i) multivariate signs based either on (pseudo-)Mahalanobis residuals or on a modified version (absolute interdirections) of Randles's interdirections, and (ii) a concept of ranks based either on (pseudo-)Mahalanobis distances or on lift-interdirections.

