Similar Articles
20 similar articles found
1.
In this study, the performance of single acceptance sampling plans by attributes is investigated using the distribution of the fraction nonconforming (i.e. the lot quality distribution) for a dependent production process. The aim of this study is to demonstrate that, in order to emphasize the consumer's risk (i.e. the risk of accepting a bad lot), it is better to evaluate a sampling plan by its performance as assessed by the posterior distribution of fractions nonconforming in accepted lots. Similarly, it is the desired posterior distribution that sets the basis for designing a sampling plan. The prior distribution used in this study is derived from a Markovian model of dependence.
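As a concrete illustration of the attribute-plan mechanics described in this abstract, the acceptance probability of a single sampling plan (n, c) at lot fraction nonconforming p is a binomial tail sum. A minimal sketch (the plan parameters and quality levels below are illustrative, not taken from the study):

```python
from math import comb

def accept_prob(n: int, c: int, p: float) -> float:
    """OC-curve value: probability of at most c nonconforming items
    in a random sample of n when the lot fraction nonconforming is p."""
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(c + 1))

# Worse lot quality gives a lower acceptance probability.
print(accept_prob(50, 2, 0.01), accept_prob(50, 2, 0.05))
```

Evaluating accept_prob over a grid of p values traces the plan's operating characteristic curve; under the dependent-process model of the study, p would itself be drawn from the lot quality distribution.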

2.
This article develops a variables sampling scheme for resubmitted lots by incorporating the concept of the Taguchi loss function. The probability of lot acceptance is derived based on the exact sampling distribution, and a two-point condition on the operating characteristic curve is used to determine the plan parameters that meet both the producer's and the consumer's quality and risk requirements. Moreover, the performance of the proposed variables resubmitted sampling plan is investigated and compared with the classical variables single sampling plan. The results indicate that the developed resubmitted sampling plan can provide the same protection with less inspection when the submitted lot is good enough. Tables of the plan parameters under various conditions are provided, and the use of the proposed plan is illustrated with an example.
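The acceptance criterion behind a variables plan of this kind is commonly the k-method: for an upper specification limit U, accept the lot when (U − x̄)/s ≥ k. A minimal sketch under that assumption (U, k, and the sample values are illustrative, not the article's tabulated parameters):

```python
import statistics

def accept_lot(sample, U, k):
    """k-method variables criterion for an upper specification limit U:
    accept when (U - xbar) / s >= k."""
    xbar = statistics.mean(sample)
    s = statistics.stdev(sample)
    return (U - xbar) / s >= k

print(accept_lot([0.0, 2.0, 4.0], U=10.0, k=3.0))  # (10 - 2) / 2 = 4 >= 3
```

A resubmitted-lot scheme applies this rule again to a fresh sample when the first sample fails.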

3.
Econometric Reviews, 2013, 32(3): 369–383
The paper makes two contributions. First, we provide a formula for the exact distribution of the periodogram evaluated at any arbitrary frequency when the sample is taken from any zero-mean stationary Gaussian process. The inadequacy of the asymptotic distribution is demonstrated through an example in which the observations are generated by a fractional Gaussian noise process. The results are then applied to derive the exact bias of the log-periodogram regression estimator (Geweke and Porter-Hudak, 1983; Robinson, 1995). The formula is computable. Practical bounds on this bias are developed, and their arithmetic mean is shown to be accurate and useful.
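The periodogram at an arbitrary frequency, the object studied in this paper, can be computed directly from its definition. A minimal sketch (the 1/(2πn) normalization used here is one common convention):

```python
import cmath
import math

def periodogram(x, lam):
    """I(lam) = |sum_t x_t exp(-i lam t)|^2 / (2 pi n), for any frequency lam,
    not just the Fourier frequencies 2 pi j / n."""
    n = len(x)
    s = sum(x[t] * cmath.exp(-1j * lam * (t + 1)) for t in range(n))
    return abs(s) ** 2 / (2 * math.pi * n)
```

The log-periodogram (GPH) estimator of the memory parameter regresses log I(λ_j) on −2 log λ_j over the first few Fourier frequencies; the paper's exact-bias formula concerns that regression.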

5.
Historical control trials compare an experimental treatment with a previously conducted control treatment. By assigning all recruited patients to the experimental arm, historical control trials can identify promising treatments in early phase trials more readily than randomized control trials. Existing designs of historical control trials with survival endpoints are based on the asymptotic normal distribution. However, it remains unclear whether the asymptotic distribution of the test statistic is close enough to the true distribution given the relatively small sample sizes of early phase trials. In this article, we address this question by introducing an exact design approach for exponentially distributed survival endpoints and comparing it with an asymptotic design on both real and simulated examples. Simulation results show that the asymptotic test can bias the sample size estimation. We conclude that the proposed exact design should be used in the design of historical control trials.
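For exponentially distributed endpoints, an exact approach can rest on the fact that the total follow-up time of n events is Gamma(n, rate), so tail probabilities are available in closed form rather than via a normal approximation. A sketch of that Erlang tail (this is the generic ingredient, not the authors' full design algorithm):

```python
import math

def erlang_sf(t, n, rate):
    """P(T > t) where T is the sum of n iid Exp(rate) lifetimes,
    i.e. the Erlang/Gamma(n, rate) survival function."""
    lt = rate * t
    return math.exp(-lt) * sum(lt**k / math.factorial(k) for k in range(n))
```

An exact test of a hypothesized event rate compares the observed total follow-up time against quantiles of this distribution.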

6.
The randomization design used to collect the data provides the basis for the exact distributions of permutation tests. The truncated binomial design is one of the designs commonly used to force balance in clinical trials and eliminate experimental bias. In this article, we consider the exact distribution of the weighted log-rank class of tests for censored data under the truncated binomial design. A double saddlepoint approximation to the p-values of this class is derived under the truncated binomial design. The speed and accuracy of the saddlepoint approximation, relative to the normal asymptotic approximation, facilitate the inversion of the weighted log-rank tests to determine nominal 95% confidence intervals for the treatment effect with right-censored data.

7.
We characterize joint tails and tail dependence for a class of stochastic volatility processes. We derive the exact joint tail shape of multivariate stochastic volatility with innovations that have a regularly varying distribution tail, and use it to give four new characterizations of tail dependence. In three cases tail dependence is a non-trivial function of linear volatility memory, parametrically represented by tail scales, while tail power indices do not provide any relevant dependence information. Although tail dependence is associated with linear volatility memory, tail dependence itself is nonlinear. In the fourth case a linear function of tail events and exceedances is linearly independent. Tail dependence falls in a class that implies the celebrated Hill (1975) tail index estimator is asymptotically normal, while the linear independence of nonlinear tail arrays ensures the asymptotic variance is the same as in the iid case. We illustrate the latter finding by simulation.
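The Hill (1975) estimator mentioned above is the average log-spacing of the k largest observations relative to the (k+1)-th largest. A minimal sketch (k is a user-chosen tuning parameter; the data must be positive):

```python
import math

def hill_estimator(x, k):
    """Hill estimate of the inverse tail index 1/alpha from the k largest
    observations of a positive sample x."""
    xs = sorted(x, reverse=True)
    return sum(math.log(xs[i] / xs[k]) for i in range(k)) / k
```

For heavy-tailed data, 1/hill_estimator(x, k) estimates the tail power index alpha; the paper's point is that under stochastic volatility this estimator remains asymptotically normal with iid-case variance.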

8.
Copulas characterize the dependence among the components of random vectors. Unlike marginal and joint distributions, which are directly observable, the copula of a random vector is a hidden dependence structure that links the joint distribution with its margins. Choosing a parametric copula model is thus a nontrivial task, but it can be facilitated by relying on a nonparametric estimator. Here the authors propose a kernel estimator of the copula that is mean-square consistent everywhere on the support. They determine the bias and variance of this estimator and study the effects of kernel smoothing on copula estimation. They then propose a smoothing bandwidth selection rule based on the derived bias and variance. After confirming their theoretical findings through simulations, they use their kernel estimator to formulate a goodness-of-fit test for parametric copula models.
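The kernel estimator proposed by the authors smooths the empirical copula; the unsmoothed version, built from pseudo-observations (ranks divided by n), is easy to sketch (ties are ignored here for brevity, and this is the raw estimator, not the authors' kernel-smoothed one):

```python
def empirical_copula(xs, ys, u, v):
    """Empirical copula C_n(u, v): fraction of points whose rank-based
    pseudo-observations fall in [0, u] x [0, v]."""
    n = len(xs)
    rx = [sorted(xs).index(a) + 1 for a in xs]  # rank of each x (no ties assumed)
    ry = [sorted(ys).index(b) + 1 for b in ys]
    return sum(1 for i in range(n) if rx[i] / n <= u and ry[i] / n <= v) / n
```

For perfectly comonotone data this recovers min(u, v) on the grid of rank values, the upper Fréchet bound.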

9.
The bias bound function of an estimator is an important quantity for performing globally robust inference. We show how to evaluate the exact bias bound of the minimax estimator of the location parameter for a wide class of unimodal symmetric location-scale families. We show, by an example, how to obtain an upper bound on the bias bound for a unimodal asymmetric location-scale family. We also provide the exact bias bound of minimum distance/disparity estimators under a contamination neighborhood generated from the same distance.

10.
Multicollinearity, or near-exact linear dependence among the vectors of regressor variables in a multiple linear regression analysis, can have important effects on the quality of least squares parameter estimates. One frequently suggested approach to these problems is principal components regression. This paper investigates alternative variable selection procedures and their implications for such an analysis.

11.
Most long-memory estimators for stationary fractionally integrated time series models are known to suffer non-negligible bias in small and finite samples. Simple moment estimators are also vulnerable to such bias, but can easily be corrected. In this article, the authors propose bias reduction methods for a moment estimator based on the lag-one sample autocorrelation. To reduce the bias of the moment estimator, the authors explicitly obtain the exact bias of the lag-one sample autocorrelation up to order n^{-1}. An example where the exact first-order bias can be noticeably more accurate than its asymptotic counterpart, even for large samples, is presented. The authors show via a simulation study that the proposed methods are promising and effective in reducing the bias of the moment estimator with minimal variance inflation. The proposed methods are applied to the Northern Hemisphere data. The Canadian Journal of Statistics 37: 476–493; 2009 © 2009 Statistical Society of Canada
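For an ARFIMA(0, d, 0) process, the lag-one autocorrelation is ρ(1) = d/(1 − d), so a moment estimator of d inverts the sample value. A minimal sketch of that uncorrected estimator (the bias corrections developed by the authors are not included):

```python
def lag1_autocorr(x):
    """Sample lag-one autocorrelation."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[t] - m) * (x[t + 1] - m) for t in range(n - 1))
    den = sum((v - m) ** 2 for v in x)
    return num / den

def d_moment(x):
    """Moment estimator of d, inverting rho(1) = d / (1 - d)
    for an ARFIMA(0, d, 0) process."""
    r1 = lag1_autocorr(x)
    return r1 / (1 + r1)
```

The article's correction replaces the sample autocorrelation with a version adjusted by its exact order-n^{-1} bias before inverting.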

12.
We consider variables acceptance sampling plans that control the lot or process fraction defective, where a specification limit defines acceptable quality. The problem is to find a sampling plan that fulfils certain conditions, usually on the operating characteristic. Its calculation depends heavily on distributional properties that, in practice, might be doubtful. If prior data are already available, we propose to estimate the sampling plan by means of bootstrap methods. The bias and standard error of the estimated plan can then be assessed easily by Monte Carlo approximation of the respective bootstrap moments. This resampling approach does not require strong assumptions and, furthermore, is a flexible method that can be extended to any statistic that might be informative about the fraction defective in a lot.
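The resampling idea can be sketched generically: draw bootstrap samples from the prior data, recompute the statistic of interest (here it would be an estimated plan parameter), and read off Monte Carlo approximations to its bootstrap bias and standard error. A minimal sketch (B and the seed are illustrative):

```python
import random
import statistics

def bootstrap_bias_se(data, stat, B=1000, seed=0):
    """Monte Carlo approximations to the bootstrap bias and standard error
    of stat(data), using B resamples drawn with replacement."""
    rng = random.Random(seed)
    reps = [stat([rng.choice(data) for _ in data]) for _ in range(B)]
    return statistics.mean(reps) - stat(data), statistics.pstdev(reps)
```

In the article's setting, stat would map prior measurement data to a plan parameter such as the acceptance constant.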

13.
The conventional random effects model for meta-analysis of proportions approximates within-study variation using a normal distribution. Due to potential approximation bias, particularly for the estimation of rare events such as some adverse drug reactions, the conventional method is considered inferior to exact methods based on binomial distributions. In this article, we compare two existing exact approaches, beta-binomial (B-B) and normal-binomial (N-B), through an extensive simulation study with a focus on rare events, which are commonly encountered in medical research. In addition, we implement the empirical ("sandwich") estimator of variance in both models to improve the robustness of the statistical inferences. To our knowledge, this is the first such application of the sandwich estimator of variance to meta-analysis of proportions. The simulation study shows that the B-B approach tends to have substantially smaller bias and mean squared error than N-B for rare events with occurrence rates under 5%, while N-B outperforms B-B for relatively common events. Use of the sandwich estimator of variance improves the precision of estimation for both models. We illustrate the two approaches by applying them to two published meta-analyses from the fields of orthopedic surgery and prevention of adverse drug reactions.
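The B-B model treats each study's event probability as Beta(a, b), giving a beta-binomial marginal likelihood for the study counts. A minimal sketch of that log-likelihood, dropping the binomial-coefficient constant that does not depend on (a, b) (this is the generic model, not the authors' full fitting code):

```python
from math import lgamma, log

def lbeta(x, y):
    """log of the Beta function B(x, y)."""
    return lgamma(x) + lgamma(y) - lgamma(x + y)

def betabinom_loglik(a, b, events, sizes):
    """Beta-binomial log-likelihood for study counts, up to the
    constant sum of log C(n_i, y_i) terms."""
    return sum(lbeta(y + a, n - y + b) - lbeta(a, b)
               for y, n in zip(events, sizes))
```

Maximizing this in (a, b) yields the pooled proportion estimate a / (a + b); a sandwich variance estimator then robustifies the standard errors.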

14.
For analyzing incidence data on diabetes and health problems, the bivariate geometric probability distribution is a natural choice, but it has remained largely unexplored due to the lack of models linking covariates with the probabilities of bivariate incidence of correlated outcomes. In this paper, bivariate geometric models are proposed for two correlated incidence outcomes. Extended generalized linear models are developed to take into account the covariate dependence of the bivariate probabilities of correlated incidence outcomes for diabetes and heart disease in the elderly population. The estimation and test procedures are illustrated using the Health and Retirement Study data. Two models are presented in this paper, one based on a conditional-marginal approach and the other based on the joint probability distribution with an association parameter. The joint model with an association parameter appears to be a very good choice for analyzing the covariate dependence of the joint incidence of diabetes and heart disease. Bootstrapping is performed to measure the accuracy of the estimates, and the results indicate very small bias.

15.
Many process characteristics follow an exponential distribution, and control charts based on such a distribution have attracted a lot of attention. However, traditional control limits may not be appropriate because of the lack of symmetry. In this paper, process monitoring through a normalizing power transformation is studied. Traditional individual measurement control charts can be used on the transformed data. The properties of this control chart are investigated. A comparison with the chart using probability limits is also carried out for cases of known and estimated parameters. Without losing much accuracy, even compared with the exact probability limits, the power transformation approach can easily be used to produce charts that can be interpreted when the normality assumption is valid.
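A power commonly used in the literature to normalize exponential data is Y = X^(1/3.6); Shewhart individuals limits are then computed on the transformed scale. A sketch under that assumption (the 3-sigma width and the data are illustrative):

```python
import statistics

def transformed_limits(data, power=1 / 3.6, k=3.0):
    """Transform exponential-like data by Y = X**power, then return
    Shewhart individuals limits (mean - k*sd, mean + k*sd) on the
    transformed scale."""
    y = [x ** power for x in data]
    m = statistics.mean(y)
    s = statistics.stdev(y)
    return m - k * s, m + k * s
```

A point X is flagged when X**power falls outside the returned limits; exact probability limits would instead use exponential quantiles directly.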

16.
This paper generalizes Nagar's (1959) approximation to the finite sample mean squared error (MSE) of the instrumental variables (IV) estimator to the case in which the errors possess an elliptical distribution whose moments exist up to infinite order. This allows for types of excess kurtosis exhibited by some financial data series. This approximation is compared numerically to Knight's (1985) formulae for the exact moments of the IV estimator under nonnormality. We use the results to explore two questions on instrument selection. First, we complement Buse's (1992) analysis by considering the impact of additional instruments on both bias and MSE. Second, we evaluate the properties of Andrews's (1999) selection method in terms of the bias and MSE of the resulting IV estimator.

17.
This paper addresses the problem of obtaining maximum likelihood estimates for the parameters of the Pearson Type I distribution (a beta distribution with unknown end points and shape parameters). Since they do not seem to have appeared in the literature, the likelihood equations and the information matrix are derived. The regularity conditions which ensure asymptotic normality and efficiency are examined, and some apparent conflicts in the literature are noted. To ensure regularity, the shape parameters must be greater than two, giving an (asymmetrical) bell-shaped distribution with high contact in the tails. A numerical investigation was carried out to explore the bias and variance of the maximum likelihood estimates and their dependence on sample size. The numerical study indicated that only for large samples (n ≥ 1000) does the bias in the estimates become small and does the Cramér-Rao bound give a good approximation for their variance. The likelihood function has a global maximum which corresponds to parameter estimates that are inadmissible. Useful parameter estimates can be obtained at a local maximum, which is sometimes difficult to locate when the sample size is small.

18.
Censoring frequently occurs in survival analysis, but the observed lifetimes are typically not large in number. Thus, inferences based on the popular maximum likelihood (ML) estimation, which often gives biased estimates, should be corrected for bias. Here, we investigate the biases of ML estimates under the progressive type-II censoring scheme (pIIcs). We use a method proposed in Efron and Johnstone [Fisher's information in terms of the hazard rate. Technical Report No. 264, Stanford University, Stanford, California; 1987] to derive general expressions for bias-corrected ML estimates under the pIIcs. This requires derivation of the Fisher information matrix under the pIIcs. As an application, exact expressions are given for bias-corrected ML estimates of the Weibull distribution under the pIIcs. The performance of the bias-corrected ML estimates and the ML estimates is compared by simulations and a real data application.

19.
This paper focusses mainly on three crucial choices identified in recent meta-analyses, namely (a) the effect of using approximate statistical techniques rather than exact methods, (b) the effect of using fixed or random effect models, and (c) the effect of publication bias on the meta-analysis result. The paper considers their impact on a set of over thirty studies of passive smoking and lung cancer in non-smokers, and addresses other issues such as the role of study comparability, the choice of raw or adjusted data when using published summary statistics, and the effect of biases such as misclassification of subjects and study quality. The paper concludes that, at least in this example, different conclusions might be drawn from meta-analyses based on fixed or random effect models; that exact methods might increase estimated confidence interval widths by 5–20% over standard approximate (logit and Mantel-Haenszel) methods, and that these methods themselves differ by this order of magnitude; that taking study quality into account changes some results and also improves homogeneity; that the use of unadjusted or author-adjusted data makes limited difference; that there appears to be obvious publication bias favouring observed raised relative risks; and that the choice of studies for inclusion is the single most critical choice made by the modeller.
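The Mantel-Haenszel method referred to above pools 2×2 tables as a ratio of weighted sums. A minimal sketch of the pooled odds ratio (the counts in the usage line are illustrative):

```python
def mantel_haenszel_or(tables):
    """Mantel-Haenszel pooled odds ratio across 2x2 tables given as
    (a, b, c, d) counts: a/b exposed/unexposed cases, c/d exposed/unexposed
    controls. Each table is weighted by 1 / its total."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

print(mantel_haenszel_or([(10, 10, 5, 20)]))  # single table: (10*20)/(10*5) = 4.0
```

This is a fixed-effect summary; the paper's comparison with random effect models and exact methods concerns how much such pooled estimates and their interval widths change.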

20.
This article extends recent results [Scand. J. Statist. 28 (2001) 699] on exact non-parametric inferences based on order statistics with progressive type-II censoring. The extension covers non-parametric inferences in which the dependence between the involved order statistics cannot be circumvented. These inferences include: (a) tolerance intervals containing at least a specified proportion of the parent distribution, (b) prediction intervals containing at least a specified number of observations in a future sample, and (c) outer and/or inner confidence intervals for a quantile interval of the parent distribution. The inferences are valid for any parent distribution with continuous distribution function. The key result shows how the probability of an event involving k dependent order statistics that are observable/uncensored under progressive type-II censoring can be represented as a mixture, with known weights, of corresponding probabilities involving k dependent ordinary order statistics. Further applications and developments concerning exact Kolmogorov-type confidence regions are indicated.
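For uncensored samples, the classical distribution-free result gives the confidence that the sample range (X_(1), X_(n)) covers at least a proportion p of the parent distribution; the article's contribution extends this kind of calculation to progressively censored order statistics. A sketch of the uncensored case:

```python
def tol_confidence(n, p):
    """Probability that the interval (X_(1), X_(n)) from n iid observations
    of a continuous distribution contains at least a proportion p of that
    distribution: 1 - n*p**(n-1) + (n-1)*p**n."""
    return 1 - n * p ** (n - 1) + (n - 1) * p ** n
```

The formula follows from the coverage of (X_(1), X_(n)) being Beta(n − 1, 2) distributed, and holds for any continuous parent, which is the distribution-free property the article relies on.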
