Similar Documents
20 similar documents retrieved.
1.
Attributes sampling is an important inspection tool in areas like product quality control, service quality control or auditing. The classical item quality scheme of attributes sampling distinguishes between conforming and nonconforming items, and measures lot quality by the lot fraction nonconforming. A more refined quality scheme rates item quality by the number of nonconformities occurring on the item, e.g., the number of defective components in a composite product or the number of erroneous entries in an accounting record, where lot quality is measured by the average number of nonconformities occurring on items in the lot. Statistical models of sampling for nonconformities rest on the idealizing assumption that the number of nonconformities on an item is unbounded. In most real cases, however, the number of nonconformities on an item has an upper bound, e.g., the number of product components or the number of entries in an accounting record. The present study develops two statistical models of sampling lots for nonconformities in the presence of an upper bound a for the number of nonconformities on each single item. For both models, the statistical properties of the sample statistics and the operating characteristics of single sampling plans are investigated. A broad numerical study compares single sampling plans with prescribed statistical properties under the bounded and unbounded quality schemes. In a large number of cases, the sample sizes for the realistic bounded models are smaller than the sample sizes for the idealizing unbounded model.
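As a hedged illustration of the comparison described above (a sketch, not the paper's exact models), the code below contrasts the operating characteristic of a single sampling plan for nonconformities under the idealized unbounded scheme (Poisson counts per item) with one plausible bounded scheme in which each item carries at most a nonconformities, modeled as a Binomial(a, p) count per item; the plan parameters and quality levels are assumptions.

```python
# Hedged sketch: acceptance probability Pa of a single sampling plan (n items,
# acceptance number c) when lot quality is the average number of nonconformities
# per item (mu). Unbounded model: Poisson counts per item, so the sample total is
# Poisson(n*mu). Bounded model (an illustrative assumption, not necessarily the
# paper's formulation): each item has at most a nonconformities, counted as
# Binomial(a, mu/a), so the sample total is Binomial(n*a, mu/a).
from scipy.stats import poisson, binom

def oc_unbounded(mu, n, c):
    return poisson.cdf(c, n * mu)

def oc_bounded(mu, n, c, a):
    return binom.cdf(c, n * a, mu / a)

n, c, a = 50, 4, 3                         # assumed plan and per-item bound
for mu in (0.02, 0.05, 0.10, 0.20):        # average nonconformities per item in the lot
    print(f"mu={mu:.2f}  Pa(unbounded)={oc_unbounded(mu, n, c):.4f}  "
          f"Pa(bounded, a={a})={oc_bounded(mu, n, c, a):.4f}")
```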

2.
Abstract

The Coefficient of Variation is one of the most commonly used statistical tools across various scientific fields. This paper proposes a use of the Coefficient of Variation, obtained by sampling, to define the polynomial probability density function (pdf) of a continuous and symmetric random variable on the interval [a, b]. The basic idea behind the first proposed algorithm is the transformation of the interval from [a, b] to [0, b-a]. The chi-square goodness-of-fit test is used to compare the proposed (observed) sample distribution with the expected probability distribution. The experimental results show that the collected data are approximated by the proposed pdf. The second algorithm proposes a new method to obtain a fast estimate of the degree of the polynomial pdf when the random variable is normally distributed. Using the known percentages of values that lie within one, two and three standard deviations of the mean, respectively (the so-called three-sigma rule of thumb), we conclude that the degree of the polynomial pdf takes values between 1.8127 and 1.8642. In the case of a Laplace(μ, b) distribution, we conclude that the degree of the polynomial pdf takes values greater than 1. All necessary calculations and graphs are produced with the statistical software R.
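A minimal sketch of the chi-square goodness-of-fit step mentioned above, using a placeholder symmetric quadratic density on [a, b] in place of the paper's CV-based polynomial pdf (which is not reproduced here); the data, bin choices and parameter values are illustrative assumptions.

```python
# Hedged sketch: chi-square goodness-of-fit comparison of observed bin counts with
# expected counts under a candidate symmetric polynomial density on [a, b].
# The quadratic density used here is a placeholder, not the paper's CV-based pdf.
import numpy as np
from scipy.stats import chisquare
from scipy.integrate import quad

rng = np.random.default_rng(1)
a, b = 2.0, 8.0
m, h = (a + b) / 2, (b - a) / 2
g = lambda t: 1.0 - ((t - m) / h) ** 2               # unnormalized placeholder density
norm = quad(g, a, b)[0]

# Draw a sample from the placeholder density by simple rejection sampling
t = rng.uniform(a, b, 5000)
u = rng.uniform(0.0, 1.0, 5000)
x = t[u <= g(t)][:500]

edges = np.linspace(a, b, 11)                        # 10 equal-width bins
obs, _ = np.histogram(x, bins=edges)
probs = np.array([quad(g, lo, hi)[0] / norm for lo, hi in zip(edges[:-1], edges[1:])])
expected = probs / probs.sum() * obs.sum()           # match totals for chisquare()

stat, pval = chisquare(obs, expected)
print(f"n = {obs.sum()}, chi-square = {stat:.2f}, p-value = {pval:.3f}")
```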

3.
Abstract

This paper considers the statistical analysis of masked data in a parallel system with inverse Weibull distributed components under type II censoring. Based on a Gamma conjugate prior, the Bayesian estimates as well as the hierarchical Bayesian estimates of the parameters and of the system reliability function are obtained using Bayesian theory and the hierarchical Bayesian method. Finally, Monte Carlo simulations are provided to compare the performance of the estimates under different masking probabilities and effective sample sizes.
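For reference, under one common parameterization of the inverse Weibull distribution (an assumption; the paper's parameterization may differ), a component lifetime has distribution function F(t) = exp(−λ t^{−β}), and a parallel system of m such components fails only when all components have failed, so its reliability function is

\[
R_{\mathrm{sys}}(t) \;=\; 1-\prod_{j=1}^{m} F_j(t) \;=\; 1-\prod_{j=1}^{m} \exp\!\bigl(-\lambda_j t^{-\beta_j}\bigr), \qquad t>0 .
\]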

4.
ABSTRACT

In statistical practice, inferences on standardized regression coefficients are often required, but complicated by the fact that they are nonlinear functions of the parameters, and thus standard textbook results are simply wrong. Within the frequentist domain, asymptotic delta methods can be used to construct confidence intervals of the standardized coefficients with proper coverage probabilities. Alternatively, Bayesian methods solve similar and other inferential problems by simulating data from the posterior distribution of the coefficients. In this paper, we present Bayesian procedures that provide comprehensive solutions for inferences on the standardized coefficients. Simple computing algorithms are developed to generate posterior samples with no autocorrelation, based on both noninformative improper and informative proper prior distributions. Simulation studies show that Bayesian credible intervals constructed by our approaches have comparable and even better statistical properties than their frequentist counterparts, particularly in the presence of collinearity. In addition, our approaches solve some meaningful inferential problems that are difficult if not impossible from the frequentist standpoint, including identifying joint rankings of multiple standardized coefficients and making optimal decisions concerning their sizes and comparisons. We illustrate applications of our approaches through examples and make sample R functions available for implementing our proposed methods.
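The sketch below is a minimal illustration of direct (autocorrelation-free) posterior simulation of standardized coefficients under the noninformative prior p(β, σ²) ∝ 1/σ²; the simulated data, the plug-in scaling by sample standard deviations, and all parameter values are assumptions of this sketch, not the paper's procedures.

```python
# Hedged sketch: direct draws from the posterior of a linear regression under the
# noninformative prior p(beta, sigma^2) ∝ 1/sigma^2, transformed to standardized
# coefficients beta_j * sd(x_j) / sd(y) (plug-in sample standard deviations).
import numpy as np

rng = np.random.default_rng(42)

# Illustrative data (assumed): two correlated predictors to mimic collinearity
n = 200
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + 0.6 * rng.normal(size=n)
y = 1.0 + 0.5 * x1 + 0.3 * x2 + rng.normal(size=n)

X = np.column_stack([np.ones(n), x1, x2])
k = X.shape[1]
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
sse = np.sum((y - X @ beta_hat) ** 2)

# Direct (non-MCMC) draws: sigma^2 | y is scaled inverse-chi^2, then beta | sigma^2, y
ndraws = 5000
sigma2 = sse / rng.chisquare(df=n - k, size=ndraws)
L = np.linalg.cholesky(XtX_inv)
z = rng.standard_normal((ndraws, k))
beta = beta_hat + np.sqrt(sigma2)[:, None] * (z @ L.T)

# Standardized slopes via plug-in sample SDs
sds = np.array([x1.std(ddof=1), x2.std(ddof=1)])
beta_std = beta[:, 1:] * sds / y.std(ddof=1)

lo, hi = np.percentile(beta_std, [2.5, 97.5], axis=0)
print("95% credible intervals for standardized slopes:")
for j, (l, h) in enumerate(zip(lo, hi), start=1):
    print(f"  x{j}: [{l:.3f}, {h:.3f}]")
# A joint question that is awkward frequentist-side but immediate here:
print("P(|beta*_1| > |beta*_2| | data) =",
      np.mean(np.abs(beta_std[:, 0]) > np.abs(beta_std[:, 1])))
```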

5.
Acceptance sampling, a branch of statistical quality control, addresses the confidence that can be placed in product quality. In some situations, the error in the underlying distribution, which depends on the sample size and the associated population size, must be taken into account when determining the sample size required for acceptable accuracy. The sample size with minimized error is then used to derive the most favorable OC curve. Neural networks are trained on the resulting errors and the corresponding tolerance levels for sample sizes drawn from populations of different sizes. The trained network can then automate the acceptance or rejection of a candidate sample size for constructing a better OC curve based on the minimized error, reducing the time spent on this burdensome work. The approach is illustrated in this paper with geostatistical data, using a SAS program.

6.
Investigators and epidemiologists often use statistics based on the parameters of a multinomial distribution. Two main approaches have been developed to assess inferences for these statistics. The first uses asymptotic formulae which are valid for large sample sizes. The second computes the exact distribution, which performs quite well for small samples. Both have limitations for sample sizes N that are neither large enough to satisfy the assumption of asymptotic normality nor small enough to allow us to generate the exact distribution. We analytically computed the 1/N corrections of the asymptotic distribution for any statistic based on a multinomial law. We applied these results to the kappa statistic in 2×2 and 3×3 tables. We also compared the coverage probability obtained with the asymptotic and the corrected distributions under various hypothetical configurations of sample size and theoretical proportions. With this method, the estimates of the mean and the variance were greatly improved, as were the 2.5 and 97.5 percentiles of the distribution, allowing us to go down to sample sizes around 20 for data sets that are not too asymmetrical. The order of the difference between the exact and the corrected values was 1/N² for the mean and 1/N³ for the variance.
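For reference, the kappa statistic to which these corrections are applied is, in its standard form for a 2×2 table with cell proportions p_{ij},

\[
\hat\kappa=\frac{p_o-p_e}{1-p_e},\qquad
p_o=p_{11}+p_{22},\qquad
p_e=p_{1\cdot}\,p_{\cdot 1}+p_{2\cdot}\,p_{\cdot 2},
\]

where p_o is the observed agreement and p_e the agreement expected by chance.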

7.
Abstract

We propose a unified approach for multilevel sample selection models using a generalized result on skew distributions arising from selection. If the underlying distributional assumption is normal, then the resulting density for the outcome is the continuous component of the sample selection density and has links with the closed skew-normal (CSN) distribution. The CSN distribution provides a framework which simplifies the derivation of the conditional expectation of the observed data. This generalizes Heckman's two-step method to a multilevel sample selection model. Finite-sample performance of the maximum likelihood estimator of this model is studied through a Monte Carlo simulation.
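As background for the conditional expectation mentioned above: in the classical single-level Heckman model with selection equation s_i = 1{z_i'γ + u_i > 0} and outcome y_i = x_i'β + ε_i observed only when s_i = 1, with (u_i, ε_i) bivariate normal, the two-step method exploits

\[
E\bigl[y_i \mid x_i, z_i, s_i=1\bigr]
= x_i'\beta + \rho\,\sigma_{\varepsilon}\,\lambda\!\bigl(z_i'\gamma\bigr),
\qquad \lambda(c)=\frac{\phi(c)}{\Phi(c)},
\]

where λ is the inverse Mills ratio. The paper generalizes expressions of this type to the multilevel CSN setting; that generalization is not reproduced here.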

8.
ABSTRACT

This article argues that researchers do not need to completely abandon the p-value, the best-known significance index, but should instead stop using significance levels that do not depend on sample sizes. A testing procedure is developed using a mixture of frequentist and Bayesian tools, with a significance level that is a function of sample size, obtained from a generalized form of the Neyman–Pearson Lemma that minimizes a linear combination of α, the probability of rejecting a true null hypothesis, and β, the probability of failing to reject a false null, instead of fixing α and minimizing β. The resulting hypothesis tests do not violate the Likelihood Principle and do not require any constraints on the dimensionalities of the sample space and parameter space. The procedure includes an ordering of the entire sample space and uses predictive probability (density) functions, allowing for testing of both simple and compound hypotheses. Accessible examples are presented to highlight specific characteristics of the new tests.
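A brief sketch of the optimization underlying the generalized Neyman–Pearson argument for simple hypotheses (the compound and predictive versions discussed in the article are not reproduced): for testing density f_0 against f_1 with rejection region R,

\[
a\alpha+b\beta \;=\; b+\int_{R}\bigl[a\,f_0(x)-b\,f_1(x)\bigr]\,dx ,
\]

which is minimized by rejecting H_0 exactly when f_1(x)/f_0(x) > a/b, so the effective significance level is determined by the weights a and b rather than fixed in advance.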

9.
The authors derive the limiting distribution of M-estimators in AR(p) models under nonstandard conditions, allowing for discontinuities in score and density functions. Unlike usual regularity assumptions, these conditions are satisfied in the context of L1-estimation and autoregression quantiles. The asymptotic distributions of the resulting estimators, however, are not generally Gaussian. Moreover, their bootstrap approximations are consistent along very specific sequences of bootstrap sample sizes only.
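For context, an M-estimator in the AR(p) model X_t = θ_1 X_{t−1} + ⋯ + θ_p X_{t−p} + ε_t minimizes

\[
\hat{\boldsymbol\theta}_n=\arg\min_{\boldsymbol\theta\in\mathbb{R}^p}\;
\sum_{t=p+1}^{n}\rho\bigl(X_t-\theta_1X_{t-1}-\cdots-\theta_pX_{t-p}\bigr),
\]

and L1-estimation corresponds to ρ(u) = |u|, whose score ψ(u) = sign(u) is discontinuous, which is the kind of nonstandard condition treated here.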

10.
Abstract

The notions of (sample) mean, median and mode are common tools for describing the central tendency of a given probability distribution. In this article, we propose a new measure of central tendency, the sample monomode, which is related to the notion of sample mode. We also illustrate the computation of the sample monomode and propose a statistical test for discrete monomodality based on the likelihood ratio statistic.

11.
ABSTRACT

In many statistical applications, estimation of population quantiles is desired. In this study, a log–flip–robust (LFR) approach is proposed to estimate, specifically, lower-end quantiles (those below the median) from a continuous, positive, right-skewed distribution. Characteristics of common right-skewed distributions suggest that a logarithm transformation (L) followed by flipping the lower half of the sample (F) allows for the estimation of the lower-end quantile using robust methods (R) based on symmetric populations. Simulations show that this approach is superior in many cases to current methods, while not suffering from the sample size restrictions of other approaches.

12.
Sampling cost is a crucial factor in sample size planning, particularly when the treatment group is more expensive than the control group. To either minimize the total cost or maximize the statistical power of the test, we consider the distribution-free Wilcoxon–Mann–Whitney test for two independent samples and the van Elteren test for a randomized block design, respectively, and develop approximate sample size formulas for the case where the distribution of the data is non-normal and/or unknown. This study derives the optimal sample size allocation ratio for a given statistical power under cost constraints, so that the resulting sample sizes minimize either the total cost or the total sample size. Moreover, for a given total cost, the optimal sample size allocation is recommended to maximize the statistical power of the test. The proposed formulas are not only innovative, but also quick and easy to apply. We also use real data from a clinical trial to illustrate how to choose the sample size for a randomized two-block design. For nonparametric methods, no existing commercial software for sample size planning considers the cost factor, and therefore the proposed methods provide important insights into the impact of cost constraints.
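As a point of comparison (not the paper's nonparametric formulas), the classical parametric version of this allocation problem, with per-subject costs c_1 and c_2 and equal variances, minimizes the total cost c_1 n_1 + c_2 n_2 for a fixed variance of the mean difference and yields the square-root allocation rule

\[
\frac{n_1}{n_2}=\sqrt{\frac{c_2}{c_1}} ,
\]

so the cheaper arm receives proportionally more subjects; the paper derives analogous ratios for the Wilcoxon–Mann–Whitney and van Elteren tests.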

13.
Abstract

In fields such as internet financial transactions and reliability engineering, data may contain an excess of both zero and one observations. Because such data lie beyond the range that conventional models can fit, this paper proposes a zero-and-one-inflated geometric regression model. By ingeniously introducing Pólya-Gamma latent variables into the Bayesian inference, posterior sampling for the high-dimensional parameters is converted into latent-variable sampling and posterior sampling for lower-dimensional parameters, respectively. This circumvents the need for Metropolis-Hastings sampling and yields samples with higher sampling efficiency. A simulation study is conducted to assess the performance of the proposed estimators for various sample sizes. Finally, a doctoral dissertation data set is analyzed to illustrate the practicability of the proposed method; the analysis shows that the zero-and-one-inflated geometric regression model with Pólya-Gamma latent variables achieves a better fit.
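For concreteness, one common way to write a zero-and-one-inflated geometric model (a sketch under assumed notation, with the geometric distribution supported on {0, 1, 2, …}) is

\[
P(Y=y)=
\begin{cases}
\varphi_0+(1-\varphi_0-\varphi_1)\,p, & y=0,\\
\varphi_1+(1-\varphi_0-\varphi_1)\,p(1-p), & y=1,\\
(1-\varphi_0-\varphi_1)\,p(1-p)^{y}, & y=2,3,\dots,
\end{cases}
\]

where the inflation probabilities φ_0, φ_1 and the geometric parameter p are linked to covariates (for example through logit-type links), which is where the Pólya-Gamma augmentation enters.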

14.
Abstract

This paper investigates the statistical analysis of grouped accelerated temperature cycling test data when the product lifetime follows a Weibull distribution. A log-linear acceleration equation is derived from the Coffin-Manson model. The problem is transformed into a constant-stress accelerated life test with grouped data and multiple acceleration variables. The Jeffreys prior and reference priors are derived. Maximum likelihood estimates and Bayesian estimates with objective priors are obtained by applying the technique of data augmentation. A simulation study shows that both methods perform well when the sample size is large, and that the Bayesian method performs better for small sample sizes.
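For reference, a minimal form of the Coffin-Manson relationship for temperature cycling (the paper's generalized, multi-variable version is not reproduced here) writes the characteristic number of cycles to failure as

\[
N_f = A\,(\Delta T)^{-\beta}
\quad\Longrightarrow\quad
\ln N_f = \ln A - \beta \ln \Delta T ,
\]

which is the log-linear acceleration equation that lets the test be treated as a constant-stress accelerated life test with ln ΔT as an acceleration variable.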

15.
In this paper, we focus on Pitman closeness probabilities when the estimators are symmetrically distributed about the unknown parameter θ. We first consider two symmetric estimators θ̂1 and θ̂2 and obtain necessary and sufficient conditions for θ̂1 to be Pitman closer to the common median θ than θ̂2. We then establish some properties in the context of estimation under the Pitman closeness criterion. We define the Pitman closeness probability, which measures the frequency with which an individual order statistic is Pitman closer to θ than some symmetric estimator. We show that, for symmetric populations, the sample median is Pitman closer to the population median than any other independent and symmetrically distributed estimator of θ. Finally, we discuss the use of Pitman closeness probabilities in the determination of an optimal ranked set sampling (RSS) scheme for the estimation of the population median when the underlying distribution is symmetric. We show that the best RSS scheme from symmetric populations in the sense of Pitman closeness is the median RSS and the randomized median RSS for the cases of odd and even sample sizes, respectively.
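For reference, the Pitman closeness criterion referred to above says that θ̂1 is Pitman closer to θ than θ̂2 when

\[
P_\theta\bigl(|\hat\theta_1-\theta|<|\hat\theta_2-\theta|\bigr)\;\ge\;\tfrac12
\quad\text{for all }\theta,
\]

with strict inequality for at least one value of θ.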

16.
The purpose of acceptance sampling is to develop decision rules for accepting or rejecting production lots based on sample data. When testing is destructive or expensive, dependent sampling procedures cumulate results from several preceding lots. This chaining of past lot results reduces the required size of the samples. Many of these procedures only chain past lot results when defects are found in the current sample. However, such selective use of past lot results achieves only a limited reduction of sample sizes. In this article, a modified approach for chaining past lot results is proposed that is less selective in its use of quality history and, as a result, requires a smaller sample size than commonly used dependent sampling procedures, such as multiple dependent sampling plans and the chain sampling plans of Dodge. The proposed plans are applicable for inspection by attributes and inspection by variables. Several properties of their operating characteristic (OC) curves are derived, and search procedures are given for selecting such modified chain sampling plans using the two-point method.
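As a hedged illustration of the chaining idea (the classical ChSP-1 plan of Dodge, not the modified plans proposed in the article), the sketch below cumulates the previous i lot results only when exactly one defective is found in the current sample of n; the plan parameters are assumptions.

```python
# Hedged sketch: OC curve of Dodge's ChSP-1 chain sampling plan.
# Accept the lot if the sample of n has 0 defectives, or if it has exactly 1
# defective and the preceding i samples all had 0 defectives; reject otherwise.
# Pa(p) = P0 + P1 * P0**i, where Pd = P(d defectives in n) under Binomial(n, p).
from scipy.stats import binom

def oc_chsp1(p, n, i):
    p0 = binom.pmf(0, n, p)
    p1 = binom.pmf(1, n, p)
    return p0 + p1 * p0**i

n, i = 20, 3                       # assumed plan parameters
for p in (0.005, 0.01, 0.02, 0.05, 0.10):
    print(f"p={p:.3f}  Pa(single, c=0)={binom.pmf(0, n, p):.4f}  "
          f"Pa(ChSP-1, i={i})={oc_chsp1(p, n, i):.4f}")
```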

17.
The two-parameter generalized exponential (GE) distribution was introduced by Gupta and Kundu [Gupta, R.D. and Kundu, D., 1999, Generalized exponential distribution. Australian and New Zealand Journal of Statistics, 41(2), 173–188.]. It was observed that the GE distribution can be used in situations where a skewed distribution for a nonnegative random variable is needed. In this article, Bayesian estimation and prediction for the GE distribution, using informative priors, are considered. Importance sampling is used to estimate the parameters as well as the reliability function, and Gibbs and Metropolis samplers are used to predict the behavior of further observations from the distribution. Two data sets are used to illustrate the Bayesian procedure.
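For reference, the two-parameter GE distribution of Gupta and Kundu has distribution and density functions

\[
F(x;\alpha,\lambda)=\bigl(1-e^{-\lambda x}\bigr)^{\alpha},
\qquad
f(x;\alpha,\lambda)=\alpha\lambda\,e^{-\lambda x}\bigl(1-e^{-\lambda x}\bigr)^{\alpha-1},
\qquad x>0,\ \alpha,\lambda>0,
\]

so the reliability function is R(x) = 1 − (1 − e^{−λx})^α.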

18.
ABSTRACT

The binomial exponential 2 (BE2) distribution was proposed by Bakouch et al. as the distribution of a random sum of independent exponential random variables, where the number of summands has a zero-truncated binomial distribution. In this article, we introduce a generalization of the BE2 distribution which offers a more flexible model for lifetime data than the BE2 distribution. The hazard rate function of the proposed distribution can be decreasing, increasing, decreasing–increasing–decreasing or unimodal, so it turns out to be quite flexible for analyzing non-negative real-life data. Some statistical properties of the distribution and parameter estimation are investigated. Three different algorithms are proposed for generating random data from the new distribution. Two real data applications, involving strength data and Proschan's air-conditioner data, are used to show that the new distribution fits lifetime data better than the BE2 distribution and some other well-known distributions.
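A minimal simulation sketch of the construction described above: a random sum of iid exponentials whose number of terms follows a zero-truncated binomial distribution. The binomial index n = 2 and all parameter values are illustrative assumptions; the generalization proposed in the article is not reproduced.

```python
# Hedged sketch: simulate X = E_1 + ... + E_K with E_j iid Exponential(rate=lam)
# and K ~ zero-truncated Binomial(n, theta); n = 2 is an assumption here.
import math
import numpy as np

rng = np.random.default_rng(0)

def r_zt_binom(size, n, theta, rng):
    """Draw from Binomial(n, theta) conditioned on being >= 1, via the truncated pmf."""
    k = np.arange(1, n + 1)
    pmf = np.array([math.comb(n, j) * theta**j * (1 - theta)**(n - j) for j in k])
    pmf /= pmf.sum()
    return rng.choice(k, size=size, p=pmf)

def r_random_exp_sum(size, theta, lam, n=2, rng=rng):
    counts = r_zt_binom(size, n, theta, rng)
    return np.array([rng.exponential(1.0 / lam, size=c).sum() for c in counts])

x = r_random_exp_sum(10_000, theta=0.4, lam=1.5)
print(f"simulated mean = {x.mean():.3f}, simulated variance = {x.var(ddof=1):.3f}")
```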

19.
ABSTRACT

We derive an analytic expression for the bias of the maximum likelihood estimator of the parameter of a doubly-truncated Poisson distribution, which proves highly effective as a means of bias correction. For smaller sample sizes, our method outperforms the alternative of bias correction via the parametric bootstrap. Bias is of little concern in the positive Poisson distribution, the most common form of truncation in the applied literature. Bias appears to be most severe in the doubly-truncated Poisson distribution when the mean of the distribution is close to the right (upper) truncation point.
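For reference, the doubly-truncated Poisson distribution restricted to the support {L, L+1, …, U} has probability mass function

\[
P(X=x)=\frac{\lambda^{x}/x!}{\sum_{j=L}^{U}\lambda^{j}/j!},
\qquad x=L,L+1,\dots,U,
\]

and the positive Poisson distribution is the special case L = 1, U = ∞.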

20.
Abstract

A key question for understanding the cross-section of expected equity returns is the following: which factors, from a given collection of factors, are risk factors, or equivalently, which factors are in the stochastic discount factor (SDF)? Though the SDF is unobserved, assumptions about which factors (from the available set) are in the SDF restrict the joint distribution of the factors in specific ways, as a consequence of the economic theory of asset pricing. A different starting collection of factors in the SDF leads to a different set of restrictions on the joint distribution of the factors. The conditional distribution of equity returns has the same restricted form regardless of what is assumed about the factors in the SDF, as long as the factors are traded, and hence the distribution of asset returns is irrelevant for isolating the risk factors. The restricted factor models are distinct (nonnested) and do not arise by omitting or including a variable from a full model, thus precluding analysis by standard statistical variable-selection methods, such as those based on the lasso and its variants. Instead, we develop what we call a Bayesian model scan strategy in which each factor is allowed to enter or not enter the SDF and the resulting restricted models (of which there are 114,674 in our empirical study) are simultaneously confronted with the data. We use a Student-t distribution for the factors, model-specific independent Student-t distributions for the location parameters, a training sample to fix prior locations, and a creative way to arrive at the joint distribution of several other model-specific parameters from a single prior distribution. This allows our method to be an essentially scalable, tuned, black-box method that can be applied across our large model space with little to no user intervention. The model marginal likelihoods, and the implied posterior model probabilities, are compared with the prior probability of 1/114,674 for each model to find the best-supported model, and thus the factors most likely to be in the SDF. We provide detailed simulation evidence of the high finite-sample accuracy of the method. Our empirical study with 13 leading factors reveals that the highest-marginal-likelihood model is a Student-t distributed factor model with 5 degrees of freedom and 8 risk factors.
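The model comparison described above reduces, for each candidate model M_k (a particular subset of factors placed in the SDF), to the posterior model probability

\[
P(M_k\mid y)=\frac{m(y\mid M_k)\,\pi(M_k)}{\sum_{j} m(y\mid M_j)\,\pi(M_j)},
\qquad \pi(M_k)=\frac{1}{114{,}674},
\]

where m(y | M_k) is the marginal likelihood of model M_k; the highest-probability model identifies the factors most likely to be in the SDF.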

