Similar Articles
20 similar articles found (search time: 31 ms)
1.
The cumulative non-central chi-square distribution is tabulated for all combinations of values of λ = 0 (0.1) 1.0 (0.2) 3.0 (0.5) 5.0 (1.0) 34.0, ν = 1 (1) 30 (2) 50 (5) 100 and y = 0.01 (0.01) 0.1 (0.1) 1.0 (0.2) 3.0 (0.5) 10.0 (1.0) 30.0 (2.0) 50.0 (5.0) 165.0. The computations have been correctly rounded to five decimal places. Also, there is a discussion of the error involved in the computations. Furthermore, there is a discussion of possible interpolation in the table using Lagrange's method.
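Such a table entry can be reproduced numerically. Below is a minimal sketch (not the paper's own program) that evaluates the non-central chi-square CDF as a Poisson mixture of central chi-square CDFs, assuming the usual parametrization in which λ is the noncentrality parameter and ν the degrees of freedom; rounding to five decimals then matches the table's stated precision.

```python
import math

def chi2_cdf(x, df):
    """Central chi-square CDF via the series for the regularized
    lower incomplete gamma function P(df/2, x/2)."""
    if x <= 0.0:
        return 0.0
    a, z = df / 2.0, x / 2.0
    term = math.exp(a * math.log(z) - z - math.lgamma(a + 1.0))
    total, n = term, 0
    while term > 1e-16 * total:
        n += 1
        term *= z / (a + n)
        total += term
    return min(total, 1.0)

def ncx2_cdf(y, nu, lam, terms=200):
    """Non-central chi-square CDF as a Poisson(lam/2) mixture of
    central chi-square CDFs with nu + 2j degrees of freedom."""
    if lam == 0.0:
        return chi2_cdf(y, nu)
    log_half = math.log(lam / 2.0)
    total = 0.0
    for j in range(terms):
        log_w = -lam / 2.0 + j * log_half - math.lgamma(j + 1.0)
        total += math.exp(log_w) * chi2_cdf(y, nu + 2 * j)
    return total

# One illustrative entry, rounded to five decimals as in the table
entry = round(ncx2_cdf(5.0, 4, 3.0), 5)
```

For λ = 0 the mixture collapses to the central chi-square CDF, which gives a quick internal consistency check on the series.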

2.
The cumulative non-central chi-square distribution is tabulated for all combinations of values of λ = 0 (0.1) 1.0 (0.2) 3.0 (0.5) 5.0 (1.0) 34.0, ν = 1 (1) 30 (2) 50 (5) 100 and y = 0.01 (0.01) 0.1 (0.1) 1.0 (0.2) 3.0 (0.5) 10.0 (1.0) 30.0 (2.0) 50.0 (5.0) 165.0. The computations have been correctly rounded to five decimal places. Also, there is a discussion of the error involved in the computations. Furthermore, there is a discussion of possible interpolation in the table using Lagrange's method.

3.
The cumulative non-central chi-square distribution is tabulated for all combinations of values of λ = 0 (0.1) 1.0 (0.2) 3.0 (0.5) 5.0 (1.0) 34.0, ν = 1 (1) 30 (2) 50 (5) 100 and y = 0.01 (0.01) 0.1 (0.1) 1.0 (0.2) 3.0 (0.5) 10.0 (1.0) 30.0 (2.0) 50.0 (5.0) 165.0. The computations have been correctly rounded to five decimal places. Also, there is a discussion of the error involved in the computations. Furthermore, there is a discussion of possible interpolation in the table using Lagrange's method.

4.
Josef Kozák, Statistics, 2013, 47(3): 363-371
Working with the linear regression model (1.1) and having the extraneous information (1.2) about the regression coefficients, the problem arises of how to build estimators (1.3) with risk (1.4) that utilize the known information so as to reduce their risk compared with the risk (1.6) of the LSE (1.5). The solution of this problem is known for a positive definite matrix T, namely in the form of estimators (1.8) and (1.10). First, it is shown that the proposed estimators (2.6), (2.9) and (2.16), based on pseudoinverses of the matrix L, solve the problem for the positive semidefinite matrix T = L'L. Further, there is the problem of interpretability of estimators in the sense of inequality (3.1); it is shown that all the estimators mentioned are at least partially interpretable in the sense of requirements (3.2) or (3.10).

5.
Renyi (Bull. Amer. Math. Soc. 71 (6) (1965) 809) suggested a combinatorial group testing model, in which the size of a testing group was restricted. In this model, Renyi considered the search for one defective element (significant factor) from a finite set of elements (factors). The corresponding optimal search designs were obtained by Katona (J. Combin. Theory 1 (2) (1966) 174). In the present work, we study Renyi's search model for several significant factors. This problem is closely related to the concept of binary superimposed codes, which were introduced by Kautz and Singleton (IEEE Trans. Inform. Theory 10 (4) (1964) 363) and were investigated by D'yachkov and Rykov (Problems Control Inform. Theory 12 (4) (1983) 229), Erdos et al. (Israel J. Math. 51 (1–2) (1985) 75), Ruszinko (J. Combin. Theory Ser. A 66 (1994) 302) and Furedi (J. Combin. Theory Ser. A 73 (1996) 172). Our goal is to prove a lower bound on the search length and to construct the optimal superimposed codes and search designs. The preliminary results have been published by D'yachkov and Rykov (Conference on Computer Science & Engineering Technology, Yerevan, Armenia, September 1997, p. 242).
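As a concrete illustration of nonadaptive group testing with a superimposed code: a test is positive exactly when its group contains at least one defective, so every item appearing in a negative test can be eliminated. The sketch below is not from the paper, and the small test matrix is an arbitrary hypothetical design, not an optimal code.

```python
def decode_group_tests(tests, outcomes, n_items):
    """Naive cover decoder for nonadaptive group testing: every item
    appearing in a test with a negative outcome cannot be defective."""
    candidates = set(range(n_items))
    for group, positive in zip(tests, outcomes):
        if not positive:
            candidates -= set(group)
    return candidates

# Hypothetical 4-test design over 5 items with true defectives {1, 3}.
tests = [[0, 1], [2, 3], [0, 2, 4], [1, 3]]
defectives = {1, 3}
outcomes = [bool(set(g) & defectives) for g in tests]  # simulated results
recovered = decode_group_tests(tests, outcomes, 5)
```

In general this decoder returns a superset of the defective set; when the test matrix is a d-disjunct (superimposed) code and there are at most d defectives, the recovery is exact.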

6.
ABSTRACT

Harter (1979) summarized applications of order statistics to multivariate analysis up through 1949. The present paper covers the period 1950–1959. References in the two papers were selected from the first and second volumes, respectively, of the author's chronological annotated bibliography on order statistics [Harter (1978, 1983)]. Tintner (1950a) established formal relations between four special types of multivariate analysis: (1) canonical correlation, (2) principal components, (3) weighted regression, and (4) discriminant analysis, all of which depend on ordered roots of determinantal equations. During the decade 1950–1959, numerous authors contributed to distribution theory and/or computational methods for ordered roots and their applications to multivariate analysis. Test criteria for (i) multivariate analysis of variance, (ii) comparison of variance-covariance matrices, and (iii) multiple independence of groups of variates when the parent population is multivariate normal were usually derived from the likelihood ratio principle until S. N. Roy (1953) formulated the union-intersection principle on which Roy & Bose (1953) based their simultaneous test and confidence procedure. Roy & Bargmann (1958) used an alternative procedure, called the step-down procedure, in deriving a test for problem (iii), and J. Roy (1958) applied the step-down procedure to problems (i) and (ii). Various authors developed and applied distribution theory for several multivariate distributions. Advances were also made on multivariate tolerance regions [Fraser & Wormleighton (1951), Fraser (1951, 1953), Fraser & Guttman (1956), Kemperman (1956), and Somerville (1958)], a criterion for rejection of multivariate outliers [Kudô (1957)], and linear estimators, from censored samples, of parameters of multivariate normal populations [Watterson (1958, 1959)].
Textbooks on multivariate analysis were published by Kendall (1957) and Anderson (1958), as well as a monograph by Roy (1957) and a book of tables by Pillai (1957).

7.
An effective and efficient search algorithm has been developed to select, from an I(1) system, zero-non-zero patterned cointegrating and loading vectors in a subset VECM, Bq(1)y(t-1) + Bq-1(L)Δy(t) = ε(t), where the long-term impact matrix Bq(1) contains zero entries. The algorithm can be applied to higher-order integrated systems. The Finnish money-output model presented by Johansen and Juselius (1990) and the United States balanced growth model presented by King, Plosser, Stock and Watson (1991) are used to demonstrate the usefulness of this algorithm in examining the cointegrating relationships in vector time series.

8.
A stratified Warner's randomized response model
This paper proposes a new stratified randomized response model based on Warner's (J. Amer. Statist. Assoc. 60 (1965) 63) model that has an optimal allocation and a large gain in precision. It also presents a drawback of the Hong et al. (Korean J. Appl. Statist. 7 (1994) 141) model under their proportional sampling assumption. It is shown that the proposed model is more efficient than the Hong et al. (Korean J. Appl. Statist. 7 (1994) 141) stratified randomized response model. Additionally, it is shown that the estimator based on the proposed method is more efficient than the Warner (J. Amer. Statist. Assoc. 60 (1965) 63), the Mangat and Singh (Biometrika 77 (1990) 439) and the Mangat (J. Roy. Statist. Soc. Ser. B 56 (1) (1994) 93) estimators under the conditions presented in both the case of completely truthful reporting and that of not completely truthful reporting by the respondents.
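For context, the base Warner (1965) estimator that these stratified variants refine can be sketched as a simulation. This is a hedged illustration of the unstratified model only, not the paper's stratified estimator: π is the sensitive-trait prevalence, and p ≠ 1/2 is the design probability that the randomization device selects the direct question.

```python
import random

def warner_estimate(responses, p):
    """Warner (1965) estimator of the sensitive proportion pi from
    randomized 'yes'(1)/'no'(0) responses; p is the probability that
    the device selects the direct question 'Are you in group A?'."""
    lam_hat = sum(responses) / len(responses)   # observed 'yes' rate
    return (lam_hat - (1.0 - p)) / (2.0 * p - 1.0)

def simulate_warner(pi_true, p, n, rng):
    """Simulate n truthful randomized responses and estimate pi_true."""
    responses = []
    for _ in range(n):
        in_a = rng.random() < pi_true      # respondent's true status
        direct = rng.random() < p          # which question was drawn
        responses.append(int(in_a == direct))  # truthful 'yes'/'no'
    return warner_estimate(responses, p)

rng = random.Random(42)
est = simulate_warner(pi_true=0.3, p=0.7, n=100_000, rng=rng)
```

Since P(yes) = πp + (1 − π)(1 − p), inverting this linear relation gives the estimator; with n = 100,000 the estimate lands close to the true π = 0.3.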

9.
Weighted methods are an important feature of multiplicity control methods. The weights must usually be chosen a priori, on the basis of experimental hypotheses. Under some conditions, however, they can be chosen making use of information from the data (therefore a posteriori) while maintaining multiplicity control. In this paper we provide: (1) a review of weighted methods (both parametric and nonparametric) for familywise type I error rate (FWE) and false discovery rate (FDR) control; (2) a review of data-driven weighted methods for FWE control; (3) a new proposal for weighted FDR control (with data-driven weights) under independence among variables; (4) an extension of this proposal to any type of dependence; and (5) a simulation study that assesses the performance of the procedure of point (4) under various conditions.
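One standard way to incorporate weights in FDR control is to apply Benjamini-Hochberg to the reweighted p-values p_i / w_i, with nonnegative weights averaging one, in the spirit of Genovese, Roeder and Wasserman (2006). The sketch below shows that generic weighted BH step; it is not a reproduction of the paper's own procedures.

```python
def weighted_bh(pvals, weights, alpha=0.05):
    """Weighted Benjamini-Hochberg: run BH on p_i / w_i, where the
    weights are nonnegative and average to one; returns the indices
    of rejected hypotheses."""
    m = len(pvals)
    q = sorted((p / w, i) for i, (p, w) in enumerate(zip(pvals, weights)))
    k = 0
    for rank, (qv, _) in enumerate(q, start=1):
        if qv <= rank * alpha / m:
            k = rank              # largest rank passing the BH line
    return sorted(i for _, i in q[:k])
```

With all weights equal to one this reduces to ordinary BH; raising w_i makes hypothesis i easier to reject at the expense of the others.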

10.
The randomized response (RR) technique with two decks of cards proposed by Odumade and Singh (2009) can always be made more efficient than the RR techniques proposed by Warner (1965), Mangat and Singh (1990), and Mangat (1994) by adjusting the proportion of cards in the decks. Arnab et al. (2012) generalized the Odumade and Singh (2009) strategy to complex survey designs and a wider class of estimators. In this paper, an improvement of the Arnab et al. (2012) estimator is made by using the maximum likelihood method.

11.
Let P(t) be the probability that a subject dies at dose level t, or that a unit fails at stress level t. The Bayesian methodology is used to test (i) that P(t) is a straight line and (ii) that P(t) is a convex (or concave) function.

12.
This note presents an extension of the Q-method of analysis for binary designs given by Rao (1956) to n-ary balanced and partially balanced block designs. Here a linked n-ary block (LNB) design is defined as the dual of a balanced n-ary (BN) design. After a note on Yates' (1939, 1940) method of P-analysis, we further extend the expressions for binary linked block (LB) designs given by Rao (1956) to linked n-ary block (LNB) designs, which admit easy estimation of the parameters for all n-ary designs of this type.

13.
A new exchange algorithm for the construction of (M, S)-optimal incomplete block designs (IBDs) is developed. This exchange algorithm is used to construct 973 (M, S)-optimal IBDs (v, k, b) for v = 4, …, 12 (varieties) with arbitrary k (block size) and b (number of blocks). The efficiencies of the "best" (M, S)-optimal IBDs constructed by this algorithm are compared with the efficiencies of the corresponding nearly balanced incomplete block designs (NBIBDs) of Cheng (1979), Cheng & Wu (1981) and Mitchell & John (1976).

14.
This paper provides the theoretical explanation and Monte Carlo experiments of using a modified version of the Durbin-Watson (DW) statistic to test an I(1) process against I(d) alternatives, that is, integrated processes of order d, where d is a fractional number. We provide the exact order of magnitude of the modified DW test when the data generating process is an I(d) process with d ∈ (0, 1.5). Moreover, the consistency of the modified DW statistic as a unit root test against I(d) alternatives with d ∈ (0, 1) ∪ (1, 1.5) is proved in this paper. In addition to the theoretical analysis, Monte Carlo experiments show that the modified DW statistic performs well enough to be used as a unit root test against I(d) alternatives.
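The classical DW statistic on which the modified version builds is simply the ratio of squared successive residual differences to the residual sum of squares; the modification itself is not reproduced here.

```python
def durbin_watson(resid):
    """Classical Durbin-Watson statistic: the sum of squared successive
    differences of the residuals over the residual sum of squares.
    Values near 2 indicate no first-order autocorrelation; values near
    0 (resp. 4) indicate strong positive (resp. negative) dependence."""
    num = sum((resid[t] - resid[t - 1]) ** 2 for t in range(1, len(resid)))
    den = sum(e * e for e in resid)
    return num / den
```

For example, a perfectly smooth residual series gives a statistic near 0, while a rapidly alternating one pushes it toward 4, reflecting the approximation DW ≈ 2(1 − ρ̂).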

15.
In this paper, we conduct a Monte Carlo simulation study to evaluate three propensity score (PS) scenarios for estimating an average treatment effect (ATE) in observational studies when treatment switching exists: (a) ignoring treatment switching in subjects (UPS), (b) removing subjects with treatment switching (RPS), and (c) adjusting for treatment switching effect (APS) with two inverse probability weighting estimators, IPW1 and IPW2. We evaluate these six estimators in terms of bias, mean squared error (MSE), empirical standard error (ESE), and coverage probability (CP) under various simulation scenarios. Simulation results show that the IPW2 estimator with RPS has relatively good performance.
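Two common inverse probability weighting forms for the ATE are the unnormalized (Horvitz-Thompson) and normalized (Hájek) estimators; whether these correspond exactly to the paper's IPW1 and IPW2 is an assumption, so the sketch below is illustrative only.

```python
def ipw_ate(y, t, e, normalized=False):
    """Inverse probability weighted average treatment effect with known
    propensity scores e; normalized=False gives the Horvitz-Thompson
    form, normalized=True the Hajek (normalized-weights) form."""
    w1 = [ti / ei for ti, ei in zip(t, e)]              # treated weights
    w0 = [(1 - ti) / (1 - ei) for ti, ei in zip(t, e)]  # control weights
    if normalized:
        mu1 = sum(w * yi for w, yi in zip(w1, y)) / sum(w1)
        mu0 = sum(w * yi for w, yi in zip(w0, y)) / sum(w0)
    else:
        n = len(y)
        mu1 = sum(w * yi for w, yi in zip(w1, y)) / n
        mu0 = sum(w * yi for w, yi in zip(w0, y)) / n
    return mu1 - mu0
```

The normalized form trades a little bias for stability when some propensity scores are extreme, which is one reason such pairs of estimators are compared in simulation.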

16.
We consider AR(q) models in time series with asymmetric innovations represented by two families of distributions: (i) gamma, with support (0, ∞), and (ii) generalized logistic, with support (−∞, ∞). Since the ML (maximum likelihood) estimators are intractable, we derive the MML (modified maximum likelihood) estimators of the parameters and show that they are remarkably efficient besides being easy to compute. We also investigate the efficiency properties of the classical LS (least squares) estimators; their efficiencies relative to the proposed MML estimators are very low.

17.
In this investigation, general efficiency balanced (GEB) and efficiency balanced (EB) designs with (v + t) treatments have been constructed, with smaller numbers of replications and block sizes, using (i) balanced incomplete block (BIB), (ii) symmetrical BIB, (iii) f-resolvable BIB, (iv) group divisible (GD) and (v) resolvable GD designs.

18.
In multi-stage sampling with the first-stage units (fsu) chosen without replacement (WOR) under varying probability schemes (VPS), unbiased estimators (UE) of the variances of homogeneous linear (HL) functions of unbiased estimators Ti of the fsu totals Yi, based on the selection of subsequent-stage units (ssu) from the chosen fsu's, are derived as homogeneous quadratic (HQ) functions of alternative, less efficient UE's Ti' of the Yi. Specific strategies are illustrated.

19.
Five rules that help to improve technical writing are (1) Start at the end; (2) Be prepared to revise; (3) Cut down on long words; (4) Be brief; (5) Think of the reader.

20.
In the field of statistical process control (SPC), control charts for attributes are widely used to detect an out-of-control condition by checking the number of defective units or defects in a sample. In this article, we use the average time to signal (ATS) and the average number of observations to signal (ANOS) to evaluate the performance of the optimal variable sample size and sampling interval (VSSI) improved square root transformation (ISRT) mean square error (MSE) (VSSI_ISRT_MSE) control chart for attribute data. In addition, this control chart is used to monitor: (1) the difference between the process mean and the target value, and (2) process variance shifts. We find that the optimal VSSI_ISRT_MSE chart performs better than the specific VSSI, the optimal variable sampling interval (VSI), and the fixed-parameters (FP) ISRT_MSE charts. An example is given to illustrate this new proposed approach.
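For a chart with fixed parameters, ATS and ANOS follow directly from the per-sample signal probability; the sketch below shows only these basic fixed-parameter definitions, not the VSSI computation in the paper, which requires a Markov chain treatment.

```python
def arl(p_signal):
    """Average run length: expected number of samples until a signal,
    when each sample signals independently with probability p_signal."""
    return 1.0 / p_signal

def ats(p_signal, h):
    """Average time to signal for a fixed sampling interval h:
    ATS = h * ARL."""
    return h * arl(p_signal)

def anos(p_signal, n):
    """Average number of observations to signal for a fixed sample
    size n: ANOS = n * ARL."""
    return n * arl(p_signal)
```

Variable sample size and sampling interval schemes aim to shrink ATS and ANOS after a shift while keeping the in-control values at their fixed-parameter levels.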

