Similar Literature
20 similar records found.
1.
For the detection of influential observations on the loading matrix of the factor analysis model, we propose to use the infinitesimal version of two matrix coefficients, including Escoufier's (1973) coefficient. We also discuss the application in factor analysis of some sensitivity measures used for similar purposes in principal component analysis.

2.
3.
Repeated measurement designs with two treatments, n (experimental) units, and p periods are examined; the two treatments are denoted A and B. The model with independent observations within and between treatment sequences is used. Optimal designs are derived for: (i) the difference of direct treatment effects and the difference of residual effects, (ii) the difference of direct treatment effects, and (iii) the difference of residual effects. We prove that for three periods, when n is odd, the optimal design in the three cases (i), (ii), and (iii) is obtained by taking the sequences BAA and ABB in numbers differing by one. If n is even, the optimal design in cases (i), (ii), and (iii) is again the same, obtained by taking the sequences ABB and BAA in equal numbers. In case (i), for n even or odd, there is no correlation between the two estimated parameters in the optimal design. For n even, case (i) was solved by Cheng and Wu in 1980. The above implies that, with two treatments, it is preferable in practice to use three periods instead of two.

4.
We introduce Euler(p, q) processes as an extension of the Euler(p) processes, for the purpose of obtaining more parsimonious models for nonstationary processes whose periodic behavior changes approximately linearly in time. The discrete Euler(p, q) models are a class of multiplicative stationary (M-stationary) processes, and their basic properties are derived. The relationship between continuous and discrete mixed Euler processes is shown. Fundamental to the theory and application of Euler(p, q) processes is a dual relationship between discrete Euler(p, q) processes and ARMA processes, which is established. The usefulness of Euler(p, q) processes is examined by comparing their spectral estimation with that obtained by existing methods, using both simulated and real data.
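A minimal sketch of the duality mentioned in the abstract: an M-stationary series observed at geometrically spaced times t_k = h^k is stationary in the index k, so an Euler(1)-type realization can be generated from an ordinary AR(1) recursion. All names and parameter choices below are illustrative, not from the paper:

```python
import random

def euler1_via_ar1(phi, h, n, seed=0):
    """Sketch of the discrete Euler/AR duality: simulate a stationary
    AR(1) series y_k and attach it to geometrically spaced times
    t_k = h**k, giving a realization of an M-stationary process."""
    rng = random.Random(seed)
    y, series = 0.0, []
    for k in range(n):
        y = phi * y + rng.gauss(0.0, 1.0)   # ordinary AR(1) step
        series.append((h ** k, y))          # observed at time t_k = h^k
    return series
```

On the geometric time scale the lag-one ratio t_{k+1}/t_k is the constant h, which is what makes the stationary AR machinery applicable.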

5.
This paper provides tables for the construction and selection of a tightened-normal-tightened variables sampling scheme of type TNTVSS (n1, n2; k). The method of designing the scheme indexed by (AQL, α) and (LQL, β) is indicated. The TNTVSS (n1, n2; k) is compared with conventional single sampling plans for variables and with the TNT (n1, n2; c) scheme for attributes, and it is shown that the TNTVSS is more efficient.

6.
The product method of estimation (Murthy, 1964) complements the ratio method when the study variate, y, and an auxiliary variate, x, have negative correlation. However, such cases are not frequent in survey practice. This paper suggests a simple transformation of x in the more common situation of positive correlation between y and x, so as to permit a product method of estimation rather than a ratio method. This leads to the advantage that the bias and mean square error have exact expressions. The technique developed by Quenouille (1956) and applied by Shukla (1976) is used for making the estimator unbiased. The minimum variance situation is investigated. Two numerical examples are included. The case of negative correlation is also examined.
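For reference, the classical ratio and product point estimators being contrasted can be sketched as follows (ybar and xbar are sample means, Xbar the known population mean of x; this is the textbook pair, not the paper's transformed estimator):

```python
def ratio_estimate(ybar, xbar, Xbar):
    """Classical ratio estimator of the population mean of y:
    effective when y and x are positively correlated."""
    return ybar * Xbar / xbar

def product_estimate(ybar, xbar, Xbar):
    """Classical product estimator (Murthy, 1964):
    effective when y and x are negatively correlated."""
    return ybar * xbar / Xbar
```

The paper's idea is to transform x so that the second form can be used even under positive correlation, since its bias and MSE admit exact expressions.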

7.
A two-point estimator is proposed for the proportion of studies with positive trends among a collection of studies, some of which may demonstrate negative trends. The proposed estimator is the y-intercept of the secant line joining the points (a, F̂(a)) and (b, F̂(b)), where F̂(p) is the empirical distribution function of p-values from one-tailed tests for positive trend derived from the individual studies. Although this estimator is negatively biased for any choice of the points 0 ≤ a < b ≤ 1, the bias is less than that of the previously proposed one-point estimator defined by setting b = 1. The bias of the two-point estimator is smallest when a and b approach the inflection point of the true distribution function, E[F̂(p)]. The utility of the two-point estimator is demonstrated by using it to estimate the number of male-mouse liver carcinogens among carcinogenicity studies conducted by the National Toxicology Program.
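The secant-intercept computation described above is simple to state in code; a minimal sketch (function names are illustrative):

```python
def ecdf(pvals, x):
    """Empirical distribution function F-hat of the one-sided p-values."""
    return sum(p <= x for p in pvals) / len(pvals)

def two_point_estimate(pvals, a, b):
    """Y-intercept of the secant line through (a, F(a)) and (b, F(b)):
    an estimate of the proportion of studies with positive trends."""
    fa, fb = ecdf(pvals, a), ecdf(pvals, b)
    slope = (fb - fa) / (b - a)
    return fa - slope * a
```

Setting b = 1 recovers the one-point estimator the abstract compares against.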

8.
Let μ be a positive measure concentrated on R+ generating a natural exponential family (NEF) F with quadratic variance function VF(m), m being the mean parameter of F. It is shown that ν(dx) = (γ + x)μ(dx), γ ≥ 0, generates a NEF G whose variance function is of the form l(m)√Δ(m) + cΔ(m), where l(m) is an affine function of m, Δ(m) is a polynomial in m (the mean of G) of degree 2, and c is a constant. The family G turns out to be a finite mixture of F and its length-biased family. We also examine the cases where F has a cubic variance function and show that for suitable choices of γ the family G has a variance function of the form P(m) + Q(m)√Δ(m), where P and Q are polynomials in m of degree 2, while Δ is an affine function of m. Finally, we extend the idea to two dimensions by considering a bivariate Poisson and a bivariate gamma mixture distribution.

9.
The procedure of on-line process control for variables proposed by Taguchi consists of inspecting the m-th item (a single item) of every m items produced and deciding, at each inspection, whether the mean value has increased or not. If the value of the monitored statistic falls outside the control limits, one decides the process is out of control and production is stopped for adjustment; otherwise, it continues. In this article, a variable sampling interval chart (with a longer interval L and a shorter interval m ≤ L) with two sets of limits is used. These limits are the warning limits (±W) and the control limits (±C), where W ≤ C. The process is stopped for adjustment when an observation falls outside the control limits or when a sequence of h observations falls between the warning limits and the control limits. The longer sampling interval is used after an adjustment or when an observation falls inside the warning limits; otherwise, the shorter sampling interval is used. The properties of an ergodic Markov chain are used to evaluate the time (in units) that the process remains in control and out of control, with the aim of building an economic-statistical model. The parameters (the sampling intervals m and L, the limits W and C, and the run length h) are optimized by minimizing the cost function subject to constraints on the average run lengths (ARLs) and the conformity fraction. The current proposal is more economical than the decision taken based on a sequence of length h = 1, L = m, and W = C, which is the model employed in earlier studies. A numerical example illustrates the proposed procedure.
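The chart's decision rule, as described in the abstract, can be sketched as a single step function (the function name, return conventions, and state handling are illustrative, not from the article):

```python
def chart_step(x, W, C, run, h):
    """One decision step of the two-limit VSI chart.
    Returns (action, run): action is 'stop' (adjust the process),
    'long' or 'short' (the next sampling interval); run counts
    consecutive observations in the warning zone."""
    if abs(x) > C:                 # outside control limits
        return 'stop', 0
    if abs(x) > W:                 # between warning and control limits
        run += 1
        if run >= h:               # h consecutive warnings: adjust
            return 'stop', 0
        return 'short', run        # use the shorter interval
    return 'long', 0               # inside warning limits: longer interval
```

Setting h = 1 and W = C collapses this to the single-limit rule of the earlier studies the article compares against.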

10.
Some examples of steep, reproductive exponential models are considered. These models are shown to possess a τ-parallel foliation in the terminology of Barndorff-Nielsen and Blaesild. The independence of certain functions follows directly from the foliation. Suppose X(t) is a Wiener process with drift, X(t) = W(t) + ct, 0 < t < T. Furthermore, let Y = max[X(s), 0 < s < T]. The joint density of Y and X = X(T), the end value, is studied within the framework of an exponential model, and it is shown that Y(Y − X) is independent of X. It is further shown that Y(Y − X), suitably scaled, has an exponential distribution. Further examples are obtained by randomizing on T.

11.
Kumar and Patel (1971) have considered the problem of testing the equality of location parameters of two exponential distributions on the basis of samples censored from above, when the scale parameters are the same and unknown. The test proposed by them is shown to be biased for n1 ≠ n2, while for n1 = n2 the test possesses the property of monotonicity and is equivalent to the likelihood ratio test, considered by Epstein and Tsao (1953) and Dubey (1963a, 1963b). Epstein and Tsao state that the test is unbiased. We may note that when the scale parameters of k exponential distributions are unknown, the problem of testing the equality of location parameters is reducible to that of testing the equality of parameters in k rectangular populations, for which a test and its power function were given by Khatri (1960, 1965); Jaiswal (1969) considered similar problems in his thesis. Here we extend the problem of testing the equality of k exponential distributions on the basis of samples censored from above when the scale parameters are equal and unknown, and we establish the likelihood ratio test (LRT) and the union-intersection test (UIT) procedures. Using the results previously derived by Jaiswal (1969), we obtain the power function for the LRT, and for k = 2 we show that the test possesses the property of monotonicity. The power function of the UIT is also given.

12.
In this article, a repairable system with age-dependent failure type and minimal repair based on a cumulative repair-cost limit policy is studied, where the information of the entire repair-cost history is used to decide whether the system is repaired or replaced. When failures occur, the system has two failure types: (i) a Type-I (minor) failure, which is rectified by a minimal repair, and (ii) a Type-II (catastrophic) failure, which calls for a replacement. We consider a bivariate replacement policy, denoted by (n, T), in which the system is replaced at age T, or at the n-th Type-I failure, or at the k-th Type-I failure (k < n, a minor failure at which the accumulated repair cost exceeds the pre-determined limit), or at the first Type-II failure, whichever occurs first. The optimal minimum-cost replacement policy (n, T)* is derived analytically in terms of its existence and uniqueness. Several classical models in the maintenance literature can be regarded as special cases of the presented model. Finally, a numerical example is given to illustrate the theoretical results.

13.
A new exchange algorithm for the construction of (M, S)-optimal incomplete block designs (IBDs) is developed. This exchange algorithm is used to construct 973 (M, S)-optimal IBDs (v, k, b) for v = 4, …, 12 (varieties) with arbitrary k (block size) and b (number of blocks). The efficiencies of the "best" (M, S)-optimal IBDs constructed by this algorithm are compared with the efficiencies of the corresponding nearly balanced incomplete block designs (NBIBDs) of Cheng (1979), Cheng & Wu (1981), and Mitchell & John (1976).

14.
Janardan (1973) introduced the generalized Polya-Eggenberger family of distributions (GPED) as a limiting distribution of the generalized Markov-Polya distribution (GMPD). Janardan and Rao (1982) gave a number of characterizing properties of the generalized Markov-Polya and generalized Polya-Eggenberger distributions. Here, the GPED family, characterized by four parameters, is formally defined and studied. The probability generating function, the moments, and certain recurrence relations among the moments are provided. The Lagrangian Katz family of distributions (Consul and Famoye, 1996) is shown to be a sub-class of the GPED family (or GPED1, as it is called in this paper). A generalized Polya-Eggenberger distribution of the second kind (GPED2) is also introduced and some of its properties are given. Recurrence relations for the probabilities of GPED1 and GPED2 are given. A number of other structural and characteristic properties of the GPED1 are provided, from which the properties of the Lagrangian Katz family follow. The parameters of GPED1 are estimated by the method of moments and the maximum likelihood method. An application is provided.

15.
For many diseases, logistic constraints render large incidence studies difficult to carry out. This becomes a drawback, particularly when a new study is needed each time the incidence rate is investigated in a new population. By carrying out a prevalent cohort study with follow-up it is possible to estimate the incidence rate if it is constant. The authors derive the maximum likelihood estimator (MLE) of the overall incidence rate, λ, as well as age-specific incidence rates, by exploiting the epidemiologic relationship (prevalence odds) = (incidence rate) × (mean duration), i.e., P/[1 − P] = λ × µ. The authors establish the asymptotic distributions of the MLEs and provide approximate confidence intervals for the parameters. Moreover, the MLE of λ is asymptotically most efficient and is the natural estimator obtained by substituting the marginal maximum likelihood estimators for P and µ into P/[1 − P] = λ × µ. Following up the subjects allows the authors to develop these widely applicable procedures. The authors apply their methods to data collected as part of the Canadian Study of Health and Ageing to estimate the incidence rate of dementia amongst elderly Canadians. The Canadian Journal of Statistics © 2009 Statistical Society of Canada
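The natural estimator implied by the identity P/(1 − P) = λ × µ is a one-liner; a minimal sketch (the function name is illustrative, and this omits the authors' asymptotic machinery):

```python
def incidence_rate(prevalence, mean_duration):
    """Point estimate of the incidence rate lambda from the
    epidemiologic identity P / (1 - P) = lambda * mu."""
    odds = prevalence / (1.0 - prevalence)   # prevalence odds
    return odds / mean_duration              # lambda = odds / mu
```

Plugging in the marginal MLEs of P and µ yields the asymptotically efficient estimator described in the abstract.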

16.
Asymptotic inferences about a linear combination of K independent binomial proportions are very frequent in applied research. Nevertheless, until quite recently research had been focused almost exclusively on cases of K ≤ 2 (particularly on cases of one proportion and the difference of two proportions). This article focuses on cases of K > 2, which have recently begun to receive more attention due to their great practical interest. In order to make this inference, there are several procedures which have not been compared: the score method (S0) and the method proposed by Martín Andrés et al. (W3) for the adjusted Wald (which is a generalization of the method proposed by Price and Bonett) on the one hand and, on the other hand, the method of Zou et al. (N0) based on the Wilson confidence interval (which is a generalization of the Newcombe method). The article describes a new procedure (P0) based on the classic Peskun method, modifies the previous methods by giving them a continuity correction (methods S0c, W3c, N0c, and P0c, respectively) and, finally, a simulation is made to compare the eight aforementioned procedures (which are selected from a total of 32 possible methods). The conclusion reached is that the S0c method is the best, although for very small samples (ni ≤ 10 ∀ i) the W3 method is better. The P0 method would be the optimal method if one needs a method which is almost never too liberal, but this entails using a method which is too conservative and which provides excessively wide CIs. The W3 and P0 methods have the additional advantage of being very easy to apply. A free programme which allows the application of the S0 and S0c methods (which are the most complex) can be obtained at http://www.ugr.es/local/bioest/Z_LINEAR_K.EXE.
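For orientation, the basic Wald-type interval for L = Σᵢ βᵢpᵢ is the common starting point that procedures such as S0 and W3 refine; a simplified sketch (this is the unadjusted textbook interval, not any of the article's methods):

```python
from math import sqrt

def wald_ci(beta, x, n, z=1.96):
    """Basic Wald CI for L = sum_i beta_i * p_i from K independent
    binomial samples (x_i successes out of n_i trials)."""
    p = [xi / ni for xi, ni in zip(x, n)]
    est = sum(b * pi for b, pi in zip(beta, p))
    var = sum(b * b * pi * (1 - pi) / ni for b, pi, ni in zip(beta, p, n))
    half = z * sqrt(var)
    return est - half, est + half
```

With beta = [1, -1] and K = 2 this reduces to the familiar Wald interval for a difference of two proportions, the case the article generalizes.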

17.
Based on the recursive formulas of Lee (1988) and Singh and Relyea (1992) for computing the noncentral F distribution, a numerical algorithm for evaluating the distributional values of the sample squared multiple correlation coefficient is proposed. The distribution function of this statistic is usually represented as an infinite weighted sum of an iterative form of the incomplete beta integral, so an effective algorithm for the incomplete beta integral is crucial to the numerical evaluation of the various distributional values. Let a and b denote the two shape parameters appearing in the incomplete beta integral, and hence in the sampling distribution function; let n be the sample size and p the number of random variates. Then both 2a = p − 1 and 2b = n − p are positive integers in sampling situations, so the numerical procedures proposed in this paper are greatly simplified by recursively formulating the incomplete beta integral. By doing this, one can jointly compute the distributional values of the probability density function (pdf) and the cumulative distribution function (cdf), from which quantile values can be obtained efficiently by Newton's method. In addition, computer codes in C are developed for demonstration and performance evaluation. The implemented method achieves exact values to the finite number of significant digits desired. In general, the numerical results are clearly better than those obtained by the various approximations and interpolations of Gurland and Asiribo (1991), Gurland and Milton (1970), and Lee (1971, 1972). When b = (1/2)(n − p) is an integer, in particular, the finite-series formulation of Gurland (1968) is used to evaluate the pdf/cdf values without truncation error; these serve as the pivotal values. With the implemented codes set to double precision, the infinite-series form of the derived method achieves the pivotal values for almost all cases under study. Related comparisons and illustrations are also presented.
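When both shape parameters are integers, the regularized incomplete beta integral admits a finite-series evaluation free of truncation error, in the spirit of the truncation-free pivotal values mentioned above; a sketch in Python rather than the authors' C code, using the standard binomial-tail identity:

```python
from math import comb

def incomplete_beta(a, b, p):
    """Regularized incomplete beta I_p(a, b) for positive integers a, b,
    via the finite binomial-tail identity (no truncation error):
    I_p(a, b) = P(X >= a), where X ~ Binomial(a + b - 1, p)."""
    n = a + b - 1
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(a, n + 1))
```

The symmetry I_p(a, b) = 1 − I_{1−p}(b, a) gives a convenient internal check on any implementation.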

18.
The mean residual life of a nonnegative random variable X with a finite mean is defined by M(t) = E[X − t | X > t] for t ≥ 0. One model of aging is the decreasing mean residual life (DMRL): M is decreasing (non-increasing) in time. It vastly generalizes the more stringent model of increasing failure rate (IFR). The exponential distribution lies at the boundary of both of these classes. There is a large literature on testing exponentiality against DMRL alternatives, which are all of the integral type. Because most parametric families of DMRL distributions are IFR, their relative merits have been compared only at some IFR alternatives. We introduce a new Kolmogorov-Smirnov type sup-test and derive its asymptotic properties. We compare the powers of this test with some integral tests by simulations using a class of DMRL, but not IFR, alternatives, as well as some popular IFR alternatives. The results show that the sup-test is much more powerful than the integral tests in all cases.
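The empirical analogue of M(t) is straightforward; a minimal sketch (illustrative, not the paper's test statistic):

```python
def mean_residual_life(sample, t):
    """Empirical estimate of M(t) = E[X - t | X > t]:
    average excess life among sample points still alive at t."""
    survivors = [x - t for x in sample if x > t]
    return sum(survivors) / len(survivors) if survivors else 0.0
```

For an exponential sample, this estimate is roughly constant in t, which reflects the exponential distribution sitting at the boundary of the DMRL class.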

19.
In this paper, we estimate the reliability of a component subjected to two different stresses which are independent of the strength of the component. We assume that the stresses follow a bivariate exponential (BVE) distribution. If X is the strength of a component subjected to two stresses (Y1, Y2), then the reliability of the component is given by R = P[Y1 + Y2 < X]. We estimate R when (Y1, Y2) follow the different BVE models proposed by Marshall-Olkin (1967), Block-Basu (1974), Freund (1961), and Proschan-Sullo (1974). The distribution of X is assumed to be exponential. The asymptotic normal (AN) distributions of these estimates of R are obtained.
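In the simplified case where the two stresses and the strength are mutually independent exponentials (a degenerate special case, not one of the BVE models above), R = P[Y1 + Y2 < X] has a closed form via the moment generating function, R = E[exp(−µ(Y1 + Y2))]; a sketch with a Monte Carlo cross-check:

```python
import random

def reliability_closed_form(l1, l2, mu):
    """R = P[Y1 + Y2 < X] for mutually independent Y1 ~ Exp(l1),
    Y2 ~ Exp(l2), X ~ Exp(mu): R = E[exp(-mu (Y1 + Y2))]."""
    return (l1 / (l1 + mu)) * (l2 / (l2 + mu))

def reliability_monte_carlo(l1, l2, mu, n=100_000, seed=1):
    """Direct simulation of P[Y1 + Y2 < X] as a sanity check."""
    rng = random.Random(seed)
    hits = sum(rng.expovariate(l1) + rng.expovariate(l2) < rng.expovariate(mu)
               for _ in range(n))
    return hits / n
```

Under the dependent BVE models of the abstract the closed form differs, but the same simulation skeleton applies once the bivariate draw is replaced.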

20.
The Fisher exact test has been unjustly dismissed by some as 'only conditional,' whereas it is unconditionally the uniformly most powerful test among all unbiased tests (tests of size α whose power is never below the nominal significance level α). The problem with this truly optimal test is that it requires randomization at the critical value(s) to be of exact size α. Obviously, in practice, one does not want to conclude that 'with probability x we have a statistically significant result.' Usually, the hypothesis is rejected only if the test statistic's outcome is more extreme than the critical value, which reduces the actual size considerably.

The randomized unconditional Fisher exact test is constructed (using Neyman-structure arguments) by deriving a conditional randomized test that randomizes at critical values c(t) with probabilities γ(t), both of which depend on the total number of successes T (the complete sufficient statistic for the nuisance parameter, the common success probability) that is conditioned upon.

In this paper, the Fisher exact test is approximated by deriving nonrandomized conditional tests whose critical region includes the critical value only if γ(t) > γ0, for a fixed threshold value γ0, such that the size of the unconditional modified test is, for all values of the nuisance parameter (the common success probability), smaller than but as close as possible to α. It will be seen that this greatly improves the size of the test compared with the conservative nonrandomized Fisher exact test.

Size, power, and p-value comparisons with the (virtual) randomized Fisher exact test, the conservative nonrandomized Fisher exact test, Pearson's chi-square test, the more competitive mid-p value, McDonald's modification, and Boschloo's modification are performed under the assumption of two binomial samples.
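For reference, the conditional p-value underlying the nonrandomized Fisher exact test is the hypergeometric tail given the table margins; a standard textbook computation (one-sided):

```python
from math import comb

def fisher_exact_one_sided(a, b, c, d):
    """One-sided conditional p-value of the Fisher exact test for the
    2x2 table [[a, b], [c, d]]: P(A >= a) under the hypergeometric
    distribution with the observed margins fixed."""
    r1, r2, c1 = a + b, c + d, a + c      # row totals and first column total
    n = r1 + r2
    denom = comb(n, c1)
    amax = min(r1, c1)
    return sum(comb(r1, k) * comb(r2, c1 - k) for k in range(a, amax + 1)) / denom
```

Rejecting only when this p-value falls below α is what makes the nonrandomized test conservative: the attained size is typically well under α, which is the gap the paper's modified tests close.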


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司). 京ICP备09084417号