Similar documents (20 results)
1.
Suppose it is desired to obtain a large number N_s of items for which individual counting is impractical, but one can demand that a batch weigh at least w units so that the number of items N in the batch may be close to the desired number N_s. If the items have mean weight ω_TH, it is reasonable to set w equal to ω_TH N_s when ω_TH is known. When ω_TH is unknown, one can take a sample of size n, not bigger than N_s, estimate ω_TH by a good estimator ω_n, and set w equal to ω_n N_s. Let R_n = K_e p^2 N_s^2 / n + K_s n be a measure of loss, where K_e and K_s are the coefficients representing the cost of the error in estimation and the cost of the sampling respectively, and p is the coefficient of variation for the weight of the items. If one determines the sample size to be the integer closest to pCN_s when p is known, where C = (K_e/K_s)^(1/2), then R_n is minimized. If p is unknown, a simple sequential procedure is proposed for which the average sample number is shown to be asymptotically equal to the optimal fixed sample size. When the weights are assumed to have a gamma distribution given ω and ω has a prior inverted gamma distribution, the optimal sample size is found to be the nonnegative integer closest to pCN_s + p^2 A(pC − 1), where A is a known constant given in the prior distribution.
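For concreteness, a minimal Python sketch of the known-p case described above: it evaluates the loss R_n = K_e p^2 N_s^2 / n + K_s n and the minimizing fixed sample size, the integer closest to pCN_s with C = (K_e/K_s)^(1/2). The numerical values are illustrative and not taken from the paper.

```python
import math

def risk(n, p, Ns, Ke, Ks):
    """Loss R_n = Ke * p^2 * Ns^2 / n + Ks * n for a fixed sample of size n."""
    return Ke * p**2 * Ns**2 / n + Ks * n

def optimal_n(p, Ns, Ke, Ks):
    """Integer closest to p*C*Ns with C = sqrt(Ke/Ks), which minimizes R_n."""
    C = math.sqrt(Ke / Ks)
    return max(1, round(p * C * Ns))

# Illustrative numbers (not from the paper).
p, Ns, Ke, Ks = 0.2, 10_000, 4.0, 1.0
n_opt = optimal_n(p, Ns, Ke, Ks)
print(n_opt, risk(n_opt, p, Ns, Ke, Ks))
```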

2.
Process capability indices have been widely used to evaluate process performance in the continuous improvement of quality and productivity. The distribution of the estimator of the process capability index C_pmk is very complicated, and an asymptotic distribution was proposed by Chen and Hsu [The asymptotic distribution of the process capability index C_pmk, Comm. Statist. Theory Methods 24(5) (1995), pp. 1279–1291]. However, we found a critical error in that asymptotic distribution when the population mean is not equal to the midpoint of the specification limits. In this paper, a corrected version of the asymptotic distribution is given. An asymptotic confidence interval for C_pmk based on the corrected asymptotic distribution is proposed, and its lower bound can be used to test whether the process is capable. A simulation study shows that the coverage probability of the proposed confidence interval is satisfactory. The relation between the six sigma technique and the index C_pmk is also discussed, and an asymptotic testing procedure for determining whether a process is capable based on C_pmk is given.
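The abstract does not restate the definition of C_pmk. The sketch below uses the standard form C_pmk = min(USL − μ, μ − LSL) / (3 √(σ² + (μ − T)²)), with T the target value taken here as the midpoint of the specification limits, and plugs in sample estimates. It gives only a point estimate, not the corrected asymptotic confidence interval developed in the paper.

```python
import numpy as np

def cpmk_hat(x, lsl, usl, target=None):
    """Plug-in estimate of Cpmk = min(USL - mu, mu - LSL) / (3*sqrt(sigma^2 + (mu - T)^2))."""
    x = np.asarray(x, dtype=float)
    if target is None:
        target = (lsl + usl) / 2.0          # midpoint of the specification limits
    mu = x.mean()
    sigma2 = x.var()                        # MLE of the variance (divides by n)
    denom = 3.0 * np.sqrt(sigma2 + (mu - target) ** 2)
    return min(usl - mu, mu - lsl) / denom

rng = np.random.default_rng(0)
sample = rng.normal(loc=10.2, scale=0.5, size=100)   # illustrative data
print(cpmk_hat(sample, lsl=8.5, usl=11.5))
```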

3.
For series systems with k components it is assumed that the cause of failure is known to belong to one of the 2^k − 1 possible subsets of the failure modes. The theoretical times to failure due to the k causes are assumed to have independent Weibull distributions with equal shape parameters. After finding the MLEs and the observed information matrix of (λ_1, …, λ_k, β), a prior distribution is proposed for (λ_1, …, λ_k), which is shown to also yield a scale-invariant noninformative prior. No particular structure is imposed on the prior of β. Methods to obtain the marginal posterior distributions of the parameters and of other parametric functions of interest, together with their Bayesian point and interval estimates, are discussed. The developed techniques are illustrated with a numerical example.

4.
In this work, we study D_s-optimal designs for Kozak's tree taper model. The approximate D_s-optimal designs are found to be invariant to tree size and hence provide a basis for constructing a general replication-free D_s-optimal design. Although the designs do not depend on the parameter value p of Kozak's model, they are sensitive to the values of the s×1 subset parameter vector of the model. The 12-point replication-free design (with 91% efficiency) suggested in this study is expected to reduce the cost and time of data collection and, more importantly, to estimate the subset parameters of interest precisely.

5.
Bernstein polynomials have many interesting properties. In statistics, they have mainly been used to estimate density functions and regression relationships. The main objective of this paper is to promote further use of Bernstein polynomials in statistics. This includes (1) providing a high-level approximation of the moments of a continuous function g(X) of a random variable X, and (2) proving Jensen's inequality for a convex function without requiring second differentiability of the function. The approximation in (1) is demonstrated to be quite superior to the delta method, which approximates the variance of g(X) under the added assumption of differentiability of the function. Two numerical examples illustrate the application of the proposed methodology in (1).
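As a rough illustration of the Bernstein machinery referred to above, the sketch below evaluates the degree-n Bernstein polynomial B_n(g; x) = Σ_k g(k/n) C(n,k) x^k (1−x)^(n−k) and compares a Monte Carlo estimate of E[g(X)] with E[B_n(g; X)] for an X supported on [0, 1]. The paper's actual moment approximation is presumably obtained in closed form rather than by simulation; the choices of g and of the Beta distribution are assumptions made only for this example.

```python
import numpy as np
from math import comb

def bernstein(g, n, x):
    """Evaluate the degree-n Bernstein polynomial of g at points x in [0, 1]."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    k = np.arange(n + 1)
    basis = (np.array([comb(n, int(j)) for j in k])
             * x[:, None] ** k * (1.0 - x[:, None]) ** (n - k))
    return basis @ np.array([g(j / n) for j in k])

# Compare E[g(X)] with E[B_n(g; X)] for X ~ Beta(2, 5) and g(x) = exp(x) (illustrative choices).
rng = np.random.default_rng(1)
x = rng.beta(2, 5, size=100_000)
g = np.exp
print(g(x).mean(), bernstein(g, 20, x).mean())
```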

6.
In applications of spatial statistics, it is necessary to compute the product of a matrix W of spatial weights and a vector y of observations. The weighting matrix often needs to be adapted to the specific problem, so the computation of Wy cannot necessarily be done with available R packages. Hence, this article suggests one way of treating such issues. The proposed technique avoids computing the matrix product by calculating each entry of Wy separately. Initially, a specific spatial autoregressive process is introduced. The performance of the proposed program is briefly compared to that of a basic program using matrix multiplication.
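A minimal sketch of the entrywise idea (not the article's R program): each component (Wy)_i is computed directly from a neighbourhood rule, so the full n×n matrix W is never stored. The row-standardized distance-band weights used here are an assumed example; any other weighting rule could be substituted.

```python
import numpy as np

def spatial_lag(y, coords, bandwidth):
    """Compute (W y)_i entry by entry without materializing W.

    W is taken to be a row-standardized distance-band matrix:
    w_ij = 1 if 0 < dist(i, j) <= bandwidth, else 0, with each row
    divided by its sum (an illustrative choice).
    """
    y = np.asarray(y, dtype=float)
    coords = np.asarray(coords, dtype=float)
    n = len(y)
    wy = np.zeros(n)
    for i in range(n):
        d = np.linalg.norm(coords - coords[i], axis=1)
        neigh = (d > 0) & (d <= bandwidth)
        if neigh.any():
            wy[i] = y[neigh].mean()          # row standardization => simple average
    return wy

rng = np.random.default_rng(2)
coords = rng.uniform(0, 10, size=(500, 2))
y = rng.normal(size=500)
print(spatial_lag(y, coords, bandwidth=1.0)[:5])
```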

7.
The hierarchically orthogonal functional decomposition of any measurable function η of a random vector X = (X_1, …, X_p) consists in decomposing η(X) into a sum of functions of increasing dimension, each depending only on a subvector of X. Even when X_1, …, X_p are dependent, this decomposition is unique if the components are hierarchically orthogonal, that is, if two components are orthogonal whenever all the variables involved in one of the summands form a subset of the variables involved in the other. Setting Y = η(X), this decomposition leads to the definition of generalized sensitivity indices able to quantify the uncertainty of Y due to each dependent input in X [Chastaing G, Gamboa F, Prieur C. Generalized Hoeffding–Sobol decomposition for dependent variables – application to sensitivity analysis. Electron J Statist. 2012;6:2420–2448]. In this paper, a numerical method is developed to identify the component functions of the decomposition using the hierarchical orthogonality property. Furthermore, the asymptotic properties of the component estimators are studied, and the generalized sensitivity indices of a toy model are estimated numerically. Lastly, the method is applied to a model arising from a real-world problem.

8.
In this paper we consider the issue of constructing retrospective T² control chart limits so as to control the overall probability of a false alarm at a specified value. We describe an exact method for constructing the control limits for retrospective examination. We then consider Bonferroni adjustments to Alt's control limit and to the standard χ² control limit as alternatives to the exact limit, since the exact limit is computationally cumbersome to find. We present the results of simulation experiments carried out to compare the performance of these control limits. The results indicate that the Bonferroni-adjusted Alt's control limit performs better than the Bonferroni-adjusted χ² control limit. Furthermore, it appears that the Bonferroni-adjusted Alt's control limit is more than adequate for controlling the overall false alarm probability at a specified value.
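The exact limit and Alt's limit are not reproduced here; the sketch below shows only the simplest variant mentioned above, a Bonferroni-adjusted χ² control limit applied to retrospective T² statistics. The data, dimension, and α are illustrative.

```python
import numpy as np
from scipy import stats

def t2_statistics(X):
    """Retrospective Hotelling T^2 for each observation, using the pooled
    sample mean and covariance estimated from the same data."""
    xbar = X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False))
    diff = X - xbar
    return np.einsum("ij,jk,ik->i", diff, S_inv, diff)

def bonferroni_chi2_limit(alpha, m, p):
    """Chi-square control limit with the overall false-alarm rate alpha
    split evenly over the m retrospective points (Bonferroni adjustment)."""
    return stats.chi2.ppf(1.0 - alpha / m, p)

rng = np.random.default_rng(3)
X = rng.multivariate_normal(mean=[0, 0, 0], cov=np.eye(3), size=50)
t2 = t2_statistics(X)
limit = bonferroni_chi2_limit(alpha=0.05, m=len(X), p=X.shape[1])
print((t2 > limit).sum(), "signals against limit", round(limit, 2))
```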

9.
An algebraic combinatorial method is used to count higher-dimensional lattice walks in Z^m that are of length n and end at height k. As a consequence of the method, Sands' two-dimensional lattice walk counting problem is generalized to higher dimensions. In addition to Sands' problem, another subclass of higher-dimensional lattice walks is also counted. Catalan-type solutions are obtained and the first moments of the walks are computed. The first moments are then used to compute the average heights of the walks. Asymptotic estimates are also given.

10.
In this article we study the effect of truncation on the performance of an open vector-at-a-time sequential sampling procedure (P*_B), proposed by Bechhofer, Kiefer and Sobel, for selecting the multinomial event which has the largest probability. The performance of the truncated version (P*_BT) is compared to that of the original basic procedure (P*_B). The performance characteristics studied include the probability of a correct selection, the expected number of vector-observations (n) needed to terminate sampling, and the variance of n. Both procedures guarantee the specified probability of a correct selection. Exact results and Monte Carlo sampling results are obtained. It is shown that P*_BT is far superior to P*_B in terms of E{n} and Var{n}, particularly when the event probabilities are equal. The performance of P*_BT is also compared to that of a closed vector-at-a-time sequential sampling procedure proposed for the same problem by Ramey and Alam; this procedure has heretofore been claimed to be the best one for this problem. It is shown that P*_BT is superior to the Ramey–Alam procedure for most of the specifications of practical interest.

11.
Researchers commonly use p-values to answer the question: How strongly does the evidence favor the alternative hypothesis relative to the null hypothesis? p-Values themselves do not directly answer this question and are often misinterpreted in ways that lead to overstating the evidence against the null hypothesis. Even in the "post p < 0.05 era," however, it is quite possible that p-values will continue to be widely reported and used to assess the strength of evidence (if for no other reason than the widespread availability and use of statistical software that routinely produces p-values and thereby implicitly advocates for their use). If so, the potential for misinterpretation will persist. In this article, we recommend three practices that would help researchers more accurately interpret p-values. Each of the three recommended practices involves interpreting p-values in light of their corresponding "Bayes factor bound," which is the largest odds in favor of the alternative hypothesis relative to the null hypothesis that is consistent with the observed data. The Bayes factor bound generally indicates that a given p-value provides weaker evidence against the null hypothesis than typically assumed. We therefore believe that our recommendations can guard against some of the most harmful p-value misinterpretations. In research communities that are deeply attached to reliance on "p < 0.05," our recommendations will serve as initial steps away from this attachment. We emphasize that our recommendations are intended merely as initial, temporary steps and that many further steps will need to be taken to reach the ultimate destination: a holistic interpretation of statistical evidence that fully conforms to the principles laid out in the ASA statement on statistical significance and p-values.
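A widely cited form of this bound, valid for p < 1/e, is 1/(−e p ln p) (Sellke, Bayarri and Berger, 2001). The sketch below assumes that form, which may differ in detail from the article's own definition, and prints the bound for a few p-values.

```python
import math

def bayes_factor_bound(p):
    """Upper bound on the odds in favour of the alternative implied by a
    p-value: 1 / (-e * p * ln p), valid for 0 < p < 1/e (assumed form)."""
    if not 0 < p < 1 / math.e:
        raise ValueError("bound applies for 0 < p < 1/e")
    return 1.0 / (-math.e * p * math.log(p))

for p in (0.05, 0.01, 0.005):
    print(f"p = {p}: Bayes factor bound ≈ {bayes_factor_bound(p):.1f}")
```

For example, p = 0.05 yields a bound of roughly 2.5, far weaker evidence against the null hypothesis than the "1 in 20" reading often attached to it.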

12.
Under proper conditions, two independent tests of the null hypothesis of homogeneity of means are provided by a set of sample averages. One test, with tail probability P_1, relates to the variation between the sample averages, while the other, with tail probability P_2, relates to the concordance of the rankings of the sample averages with the rankings anticipated under an alternative hypothesis. The quantity G = P_1 P_2 is considered as the combined test statistic and, except for the discreteness in the null distribution of P_2, would correspond to the Fisher statistic for combining probabilities. It is illustrated, for the case of four means, how to obtain critical values of G, or critical values of P_1 for each possible value of P_2, taking the discreteness into account. Alternative measures of concordance considered are Spearman's ρ and Kendall's τ. In the case of two averages, the approach amounts to assigning two-thirds of the test size to the concordant tail and one-third to the discordant tail.
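Ignoring the discreteness adjustment that the article handles explicitly, the continuous-case calculation behind G is just Fisher's combination of two independent p-values, sketched below: under the null, −2 ln G follows a χ² distribution with 4 degrees of freedom. The example p-values are made up.

```python
import math
from scipy import stats

def fisher_combined_pvalue(p1, p2):
    """Fisher combination of two independent p-values: under H0 (continuous
    case), -2*ln(G) with G = p1*p2 follows a chi-square with 4 df."""
    g = p1 * p2
    return g, stats.chi2.sf(-2.0 * math.log(g), 4)

g, p_comb = fisher_combined_pvalue(p1=0.03, p2=0.20)
print(f"G = {g:.4f}, combined p-value = {p_comb:.4f}")
```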

13.
E. Csáki & I. Vincze, Statistics, 2013, 47(4): 531–548
Two test statistics analogous to Pearson's chi-square test function, given in (1.6) and (1.7), are investigated. These statistics utilize not only the number of sample elements lying in the respective intervals of the partition but also their positions within the intervals. It is shown that the test statistics are asymptotically distributed, as the sample size N tends to infinity, according to the χ² distribution with parameter r, the number of intervals chosen. The limiting distribution of the test statistics under the null hypothesis when N tends to infinity and r = O(N^α) (0 < α < 1), as well as the consistency of the tests based on these statistics, is considered. Some remarks are also made concerning the efficiency of the corresponding goodness-of-fit tests; the authors intend to return to a more detailed treatment of the efficiency later.

14.
This article deals with the optimal allocation of two standby redundancies in a two-component series/parallel system. Two original components C1 and C2 are used to construct a series/parallel system, and two spares R1 (identical to C1) and R2 (different from both C1 and C2) are at hand to serve as standby redundancies that enhance the reliability of the system. The goal for an engineer is to find the optimal allocation policy in this framework. It is shown that, for the series structure, the engineer should allocate R2 to C1 and R1 to C2 provided that C1 (or R1) performs either the best or the worst among all the units; otherwise, the allocation policy should be reversed. For the parallel structure, the optimal allocation strategy is the reverse of that for the series case. We also provide some numerical examples illustrating the theoretical results.

15.
The principal components analysis (PCA) in the frequency domain of a stationary p-dimensional time series (X_n)_{n∈Z} leads to a summarizing time series written as a linear combination series X_n = Σ_m C_m ∘ X_{n−m}. Therefore, we observe that, when the coefficients C_m, m ≠ 0, are close to 0, this PCA is close to the usual PCA, that is, the PCA in the temporal domain. When the coefficients tend to 0, the corresponding limit is said to satisfy a property denoted 𝒫, whose consequences we study. Finally, we examine, for any series, the proximity between the two PCAs.

16.
The distribution of the ratio of two independent normal random variables X and Y is heavy tailed and has no moments. The shape of its density can be unimodal, bimodal, symmetric, asymmetric, and even similar to a normal distribution close to its mode. To our knowledge, conditions for a reasonable normal approximation to the distribution of Z = X/Y have been presented in the scientific literature only through simulations and empirical results. A proof of the existence of a proposed normal approximation to the distribution of Z, in an interval I centered at β = E(X)/E(Y), is given here for the case where X and Y are independent, have positive means, and their coefficients of variation fulfill certain conditions. In addition, a graphical, informative way of assessing the closeness of the distribution of a particular ratio X/Y to the proposed normal approximation is suggested by means of a receiver operating characteristic (ROC) curve.
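A hedged sketch of the kind of comparison involved: simulate Z = X/Y for independent normals with positive means and small coefficients of variation, and compare its central quantiles with a normal centered at β = E(X)/E(Y). The first-order delta-method variance used for the scale is an assumption of this example, not necessarily the paper's approximation; all numerical values are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Independent normals with positive means and small coefficients of variation
# (illustrative values, not from the paper).
mx, sx = 10.0, 0.5
my, sy = 5.0, 0.2

x = rng.normal(mx, sx, size=200_000)
y = rng.normal(my, sy, size=200_000)
z = x / y

beta = mx / my                                              # centre of the approximation
var_delta = beta**2 * ((sx / mx) ** 2 + (sy / my) ** 2)     # first-order delta method (assumed scale)

# Compare simulated quantiles of Z near the mode with the normal approximation.
for q in (0.1, 0.25, 0.5, 0.75, 0.9):
    print(q, np.quantile(z, q), stats.norm.ppf(q, loc=beta, scale=np.sqrt(var_delta)))
```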

17.
This paper discusses the problem of fitting a parametric model to the conditional variance function in a class of heteroscedastic regression models. The proposed test is based on the supremum of a Khmaladze-type martingale transformation of a certain partial-sum process of calibrated squared residuals. The asymptotic null distribution of this transformed process is shown to be the same as that of a time-transformed standard Brownian motion. The test is shown to be consistent against a large class of fixed alternatives and to have nontrivial asymptotic power against a class of nonparametric n^(-1/2)-local alternatives, where n is the sample size. Simulation studies are conducted to assess the finite-sample performance of the proposed test and to compare it with an existing test.

18.
The k nearest neighbors (k-NN) classifier is one of the most popular methods for statistical pattern recognition and machine learning. In practice, the size k, the number of neighbors used for classification, is usually set arbitrarily to one or some other small number, or chosen by cross-validation. In this study, we propose a novel alternative approach for choosing k. Based on a k-NN-based multivariate multi-sample test, we assign each k a permutation-test-based Z-score, and the number of neighbors is set to the k with the highest Z-score. This approach is computationally efficient because we derive formulas for the mean and variance of the test statistic under the permutation distribution for multiple sample groups. Several simulated and real-world data sets are analyzed to investigate the performance of the approach. Its usefulness is demonstrated by evaluating the prediction accuracies obtained when the Z-score is used as the criterion to select k. We also compare our approach to the widely used cross-validation approaches. The results show that the size k selected by our approach yields high prediction accuracies when informative features are used for classification, whereas the cross-validation approach may fail in some cases.
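A rough sketch of the idea, with an assumed form of the multi-sample statistic (the proportion of nearest-neighbour pairs sharing a class label) and a brute-force permutation Z-score in place of the closed-form permutation moments derived in the paper; the data and the candidate range for k are illustrative.

```python
import numpy as np
from scipy.spatial import distance_matrix

def knn_same_class_statistic(D, labels, k):
    """Proportion of (point, neighbour) pairs whose labels agree, using the
    k nearest neighbours of each point (self excluded)."""
    order = np.argsort(D, axis=1)[:, 1:k + 1]   # nearest k columns, skipping self
    return np.mean(labels[order] == labels[:, None])

def z_score_for_k(X, labels, k, n_perm=200, seed=0):
    """Monte Carlo permutation Z-score of the k-NN statistic (the paper uses
    closed-form permutation moments instead of resampling)."""
    rng = np.random.default_rng(seed)
    D = distance_matrix(X, X)
    obs = knn_same_class_statistic(D, labels, k)
    perm = np.array([knn_same_class_statistic(D, rng.permutation(labels), k)
                     for _ in range(n_perm)])
    return (obs - perm.mean()) / perm.std(ddof=1)

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(2, 1, (40, 2))])
labels = np.repeat([0, 1], 40)
scores = {k: z_score_for_k(X, labels, k) for k in range(1, 16)}
best_k = max(scores, key=scores.get)
print("selected k:", best_k)
```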

19.
The aim of this study is to assign weights w_1, …, w_m to m clustering variables Z_1, …, Z_m so that the k groups uncovered reveal more meaningful within-group coherence. We propose a new criterion to be minimized, namely the sum of the weighted within-cluster sums of squares plus a penalty for the heterogeneity in the variable weights w_1, …, w_m. We present the computing algorithm for this weighted k-means clustering, a working procedure for determining a suitable value of the penalty constant, and numerical examples, one simulated and two real.
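The abstract does not give the exact penalty term, so the sketch below evaluates the criterion with an assumed quadratic penalty on the deviation of the weights from uniformity; the data, cluster assignments, weights, and penalty constant are all illustrative.

```python
import numpy as np

def weighted_kmeans_criterion(X, assign, weights, lam):
    """Sum of weighted within-cluster sums of squares plus a penalty on the
    heterogeneity of the variable weights. The quadratic penalty below is an
    illustrative choice; the paper's penalty may take a different form."""
    X = np.asarray(X, dtype=float)
    weights = np.asarray(weights, dtype=float)
    total = 0.0
    for g in np.unique(assign):
        Xg = X[assign == g]
        centre = Xg.mean(axis=0)
        total += np.sum(weights * (Xg - centre) ** 2)
    m = X.shape[1]
    penalty = lam * np.sum((weights - 1.0 / m) ** 2)
    return total + penalty

rng = np.random.default_rng(6)
X = rng.normal(size=(60, 4))
assign = rng.integers(0, 3, size=60)
w = np.array([0.4, 0.3, 0.2, 0.1])           # weights summing to 1 (assumed constraint)
print(weighted_kmeans_criterion(X, assign, w, lam=2.0))
```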

20.
Methods based on hypothesis tests (HTs) in the Haar domain are widely used to denoise Poisson count data. Facing large datasets or real-time applications, Haar-based denoisers have to use the decimated transform to meet limited-memory or computation-time constraints. Unfortunately, for regular underlying intensities, decimation yields discontinuous estimates and strong "staircase" artifacts. In this paper, we propose to combine the HT framework with the decimated biorthogonal Haar (Bi-Haar) transform instead of the classical Haar transform. The Bi-Haar filter bank is normalized such that the p-values of the Bi-Haar coefficients (p_BH) provide a good approximation to those of Haar (p_H) for high-intensity settings or large scales; for low-intensity settings and small scales, we show that p_BH are essentially upper-bounded by p_H. Thus, we may apply the Haar-based HTs to the Bi-Haar coefficients to control a prefixed false positive rate. By doing so, we benefit from the regular Bi-Haar filter bank to obtain a smooth estimate while maintaining a low computational complexity. A Fisher-approximation-based threshold implementing the HTs is also established. The efficiency of the method is illustrated on an example of hyperspectral source flux estimation.
