Similar Articles (20 results)
1.
Abstract

Acceptance sampling plans are quality tools for both the manufacturer and the customer. Reducing the number of nonconforming items ultimately increases the manufacturer's profit and enhances consumer satisfaction. In this article, a mixed double sampling plan is proposed in which attribute double sampling inspection is used in the first stage and a variables sampling plan based on the process capability index Cpk is used in the second stage. The optimal parameters are determined so that the producer's and the consumer's risks are satisfied with minimum average sample number, and they are estimated under different plan settings using the two-points-on-the-operating-characteristic-curve approach. In designing the proposed mixed double sampling plan, we consider both the symmetric and the asymmetric nonconforming cases under variables inspection. The efficiency of the proposed plan is discussed and compared with existing sampling plans. Tables are constructed for easy selection of the optimal plan parameters, and an industrial example is included to illustrate the implementation of the proposed plan.
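As background for the variables stage, here is a minimal sketch of how the process capability index used in the second stage is computed from a sample, via the textbook formula Cpk = min((USL − μ)/(3σ), (μ − LSL)/(3σ)); the specification limits and measurements below are hypothetical, and this is not the article's plan-design algorithm.

    import numpy as np

    def cpk(x, lsl, usl):
        """Estimate Cpk = min((USL - mean)/(3*sd), (mean - LSL)/(3*sd))."""
        mu, sigma = np.mean(x), np.std(x, ddof=1)
        return min((usl - mu) / (3 * sigma), (mu - lsl) / (3 * sigma))

    # Hypothetical measurements checked against hypothetical spec limits.
    rng = np.random.default_rng(0)
    sample = rng.normal(loc=10.0, scale=0.1, size=50)
    print(cpk(sample, lsl=9.7, usl=10.3))   # roughly 1.0 for this process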

2.
Abstract

In incident cohort studies, survival data often include subjects who have had an initiating event at recruitment and may potentially experience two successive events (first and second) during the follow-up period. When disease registries or surveillance systems collect data based on incidence occurring within a specific calendar time interval, the initial event is usually subject to double truncation. Furthermore, since the second duration process is observable only if the first event has occurred, double truncation and dependent censoring arise. In this article, under these two sampling biases and with an unspecified distribution for the truncation variables, we propose a nonparametric estimator of the joint survival function of the two successive duration times using the inverse-probability-weighted (IPW) approach. The consistency of the proposed estimator is established. Based on the estimated marginal survival functions, we also propose a two-stage procedure for estimating the parameters of a copula model. The bootstrap method is used to construct confidence intervals. Numerical studies demonstrate that the proposed estimation approaches perform well with moderate sample sizes.
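To convey the general idea (not the article's exact weights, which must account for double truncation and dependent censoring), here is a minimal sketch of inverse-probability-weighted survival estimation: each observed duration is weighted by the inverse of its estimated probability of entering the sample, so over-represented durations are down-weighted. The data and inclusion probabilities are hypothetical.

    import numpy as np

    def ipw_survival(t_obs, incl_prob, grid):
        """IPW estimate of S(t) = P(T > t) from a biased sample.

        t_obs     : observed durations
        incl_prob : estimated probability that each duration was sampled
        grid      : time points at which to evaluate the survival function
        """
        w = 1.0 / incl_prob                      # inverse-probability weights
        return np.array([np.sum(w * (t_obs > t)) / np.sum(w) for t in grid])

    rng = np.random.default_rng(1)
    t = rng.exponential(2.0, size=500)           # hypothetical durations
    p = np.clip(t / t.max(), 0.05, 1.0)          # longer durations over-sampled
    keep = rng.uniform(size=500) < p             # length-biased inclusion
    print(ipw_survival(t[keep], p[keep], grid=[1.0, 2.0, 4.0]))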

3.
In partly linear models, the dependence of the response y on (x, t) is modeled through the relationship y = x^T β + g(t) + ε, where ε is independent of (x, t). We are interested in developing an estimation procedure that combines the flexibility of partly linear models, studied by several authors, while allowing some variables to belong to a non-Euclidean space. The motivating application of this paper is the explanation of atmospheric SO2 pollution incidents using these models when some of the predictive variables take values on a cylinder. In this paper, the estimators of β and g are constructed when the explanatory variables t take values on a Riemannian manifold, and the asymptotic properties of the proposed estimators are obtained under suitable conditions. We illustrate the use of this estimation approach on an environmental data set and explore the performance of the estimators through a simulation study.
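For intuition, here is a minimal sketch of a Robinson-type profile estimator for the classical Euclidean special case (the article's manifold-valued setting requires kernels adapted to the Riemannian metric, which are not reproduced here): kernel-smooth both y and x on t, regress the residuals to estimate β, then smooth y − x^T β̂ to recover g. The data, bandwidth, and g below are hypothetical.

    import numpy as np

    def nw_smooth(t, z, h):
        """Nadaraya-Watson regression of z on t with a Gaussian kernel."""
        w = np.exp(-0.5 * ((t[:, None] - t[None, :]) / h) ** 2)
        w /= w.sum(axis=1, keepdims=True)
        return w @ z

    rng = np.random.default_rng(2)
    n = 300
    t = rng.uniform(0.0, 1.0, size=n)
    x = rng.normal(size=(n, 2))
    y = x @ np.array([1.0, -0.5]) + np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=n)

    h = 0.05
    x_res = x - nw_smooth(t, x, h)              # partial t out of x
    y_res = y - nw_smooth(t, y, h)              # partial t out of y
    beta_hat = np.linalg.lstsq(x_res, y_res, rcond=None)[0]
    g_hat = nw_smooth(t, y - x @ beta_hat, h)   # smooth residuals to estimate g
    print(beta_hat)                             # close to (1.0, -0.5)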

4.
Abstract

In this article, we obtain point and interval estimates of the multicomponent stress-strength reliability of an s-out-of-j system using classical and Bayesian approaches, assuming that both the stress and strength variables follow a Chen distribution with a common shape parameter which may be known or unknown. The uniformly minimum variance unbiased estimator of reliability is obtained analytically when the common parameter is known. The behavior of the proposed reliability estimates is studied via their estimated risks in Monte Carlo simulations. Finally, a data set is analyzed for illustrative purposes.
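For orientation, a minimal Monte Carlo sketch of the quantity being estimated, R(s,j) = P(at least s of j independent strengths exceed a common stress), with both variables drawn from one common parameterization of the Chen distribution, F(x) = 1 − exp{λ(1 − exp(x^b))}, via inverse-transform sampling. All parameter values are hypothetical, and this brute-force estimate is not the article's estimator.

    import numpy as np

    def rchen(lam, b, size, rng):
        """Inverse-transform sampling from Chen(lam, b):
        F(x) = 1 - exp(lam * (1 - exp(x**b)))."""
        u = rng.uniform(size=size)
        return np.log(1.0 - np.log1p(-u) / lam) ** (1.0 / b)

    def reliability(s, j, lam_strength, lam_stress, b, n_sim=100_000, seed=3):
        """Monte Carlo estimate of P(at least s of j strengths exceed the stress)."""
        rng = np.random.default_rng(seed)
        strength = rchen(lam_strength, b, (n_sim, j), rng)
        stress = rchen(lam_stress, b, (n_sim, 1), rng)
        return np.mean((strength > stress).sum(axis=1) >= s)

    # Hypothetical 2-out-of-4 system with a common shape parameter b.
    print(reliability(s=2, j=4, lam_strength=1.5, lam_stress=0.8, b=1.0))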

5.
Abstract

This article is concerned with the comparison of Bayesian and classical testing of a point null hypothesis for the Pareto distribution when there is a nuisance parameter. In the first stage, using a fixed prior distribution, the posterior probability is obtained and compared with the P-value. In the second stage, lower bounds on the posterior probability of H0, over a reasonable class of prior distributions, are compared with the P-value. It is shown that even in the presence of nuisance parameters in the model, these two approaches can lead to different results in statistical inference.

6.
Abstract

It has been shown that equilibrium restrictions in a search model can be used to identify quantiles of the search cost distribution from observed prices alone. These quantiles can be difficult to estimate in practice. This article uses a minimum distance approach to estimate them that is easy to compute. A version of our estimator is the solution to a nonlinear least-squares problem that can be straightforwardly programmed in software such as STATA. We show that our estimator is consistent and has an asymptotic normal distribution. Its distribution can be consistently estimated by a bootstrap. Our estimator can be used to estimate the cost distribution nonparametrically on a larger support when prices from heterogeneous markets are available. We propose a two-step sieve estimator for that case. The first step estimates quantiles from each market; they are used in the second step as generated variables to perform nonparametric sieve estimation. We derive the uniform rate of convergence of the sieve estimator, which can be used to quantify the errors incurred from interpolating data across markets. To illustrate, we use online bookmaking odds for English football league matches (as prices) and find evidence suggesting that consumer search costs fell after a change in British law allowed gambling operators to advertise more widely. Supplementary materials for this article are available online.

7.
The estimation of extreme conditional quantiles is an important issue in different scientific disciplines. Up to now, the extreme value literature has focused mainly on estimation procedures based on independent and identically distributed samples. Our contribution is a two-step procedure for estimating extreme conditional quantiles. In the first step, nonextreme conditional quantiles are estimated nonparametrically using a local version of the regression quantile methodology of Koenker and Bassett (1978, Econometrica, 46, 33–50). Next, these nonparametric quantile estimates are used as analogues of univariate order statistics in procedures for extreme quantile estimation. The performance of the method is evaluated for both heavy-tailed distributions and distributions with a finite right endpoint using a small-sample simulation study. A bootstrap procedure is developed to guide the selection of an optimal local bandwidth. Finally, the procedure is illustrated in two case studies.
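A minimal sketch of the second step for the heavy-tailed case, using standard univariate extreme value tools: the Hill estimator of the tail index computed from the k largest order statistics, followed by Weissman's extrapolation formula for a far-tail quantile. In the article the inputs would be the nonparametric conditional quantile estimates from step one; here a plain heavy-tailed sample and the choice of k are hypothetical.

    import numpy as np

    def weissman_quantile(x, p, k):
        """Extrapolated estimate of the (1 - p)-quantile for small p.

        gamma is the Hill estimator based on the k largest observations;
        the Weissman formula then extrapolates beyond the sample range.
        """
        xs = np.sort(x)
        n = len(xs)
        threshold = xs[n - k - 1]                                 # (k+1)-th largest
        gamma = np.mean(np.log(xs[n - k:]) - np.log(threshold))   # Hill estimator
        return threshold * (k / (n * p)) ** gamma                 # Weissman formula

    rng = np.random.default_rng(4)
    x = rng.pareto(3.0, size=1000) + 1.0   # Pareto tail, true tail index 1/3
    print(weissman_quantile(x, p=0.001, k=100))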

8.
Abstract

In this article, a finite-source discrete-time queueing system is modeled as a discrete-time homogeneous Markov system with finite state size capacities (HMS/c) and transition priorities. This Markov system comprises three states. The first state of the HMS/c corresponds to the source and the second to the state containing the servers. The second state has a finite capacity, which corresponds to the number of servers. The members of the system which cannot enter the second state, due to its finite capacity, enter the third state, which represents the system's queue. In order to examine the variability of the state sizes, recursive formulae for their factorial and mixed factorial moments are derived in matrix form. As a consequence, the probability mass function of each state size can be evaluated. The expected time in queue is also computed by means of the interval transition probabilities. The theoretical results are illustrated by a numerical example.
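To make the three-state structure concrete, here is a minimal discrete-time simulation in the same spirit: members leave the source for service, overflow beyond the c servers waits in the queue, and completed services return to the source. The transition probabilities and capacity are hypothetical, and the article's moment recursions are not reproduced.

    import numpy as np

    def simulate(n_members=50, c=5, p_arrive=0.2, p_done=0.5, steps=200, seed=5):
        """Track (source, service, queue) sizes of a finite-source queue."""
        rng = np.random.default_rng(seed)
        source, service, queue = n_members, 0, 0
        sizes = []
        for _ in range(steps):
            done = rng.binomial(service, p_done)        # completions return to source
            arrivals = rng.binomial(source, p_arrive)   # members requesting service
            service -= done
            source += done - arrivals
            admit = min(c - service, queue + arrivals)  # fill the free servers
            queue += arrivals - admit                   # overflow waits in the queue
            service += admit
            sizes.append((source, service, queue))
        return np.array(sizes)

    print(simulate().mean(axis=0))   # average size of each state over time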

9.
Abstract

Background: Instrumental variables (IVs) have become much easier to find in the "Big Data" era, which has increased the number of applications of the two-stage least squares (TSLS) model. With the increased availability of IVs, the possibility that these IVs are weak has also increased. Prior work suggested a 'rule of thumb' that IVs with a first-stage F statistic of at least ten avoid a relative bias in point estimates greater than 10%. We investigated whether this threshold is also an efficient guarantee of low false rejection rates in null hypothesis tests in TSLS applications with many IVs.

Objective: To test how the ‘rule of thumb’ for weak instruments performs in predicting low false rejection rates in the TSLS model when the number of IVs is large.

Method: We used a Monte Carlo approach to create 28 data-generating models, with the number of IVs varying from 3 to 30. For each model, we generated 2,000 observations per iteration and ran 50,000 iterations to reach convergence in the rejection rates. The true coefficient was set to 0, and the probability of rejecting this null hypothesis was recorded for each model as a measure of the false rejection rate. The relationship between the endogenous variable and the IVs was calibrated so that the first-stage F statistic equaled ten, thus simulating the 'rule of thumb.'

Results: We found that the false rejection rate (type I error) increased as the number of IVs in the TSLS model increased, while the first-stage F statistic was held at 10. The false rejection rate exceeds 10% when the TSLS model has 24 IVs and exceeds 15% when it has 30 IVs.

Conclusion: When more instrumental variables are included in the model, the 'rule of thumb' is no longer an efficient guarantee of good performance in hypothesis testing. A more restrictive F-statistic threshold is recommended to replace the 'rule of thumb,' especially when the number of instrumental variables is large.
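A minimal sketch of this kind of experiment: draw many weak instruments, estimate the (truly zero) coefficient by 2SLS, and record how often the t-test rejects at the 5% level. The first-stage coefficients below are simply set small rather than calibrated to an exact first-stage F of ten as in the article, and far fewer replications are used; all numbers are hypothetical.

    import numpy as np

    def tsls_reject_rate(n_iv=24, n=2000, reps=2000, seed=6):
        """Share of Monte Carlo draws in which 2SLS rejects the true beta = 0."""
        rng = np.random.default_rng(seed)
        pi = np.full(n_iv, 0.05)       # weak first stage (would be tuned to F = 10)
        rejections = 0
        for _ in range(reps):
            z = rng.normal(size=(n, n_iv))
            u = rng.normal(size=n)                # structural error
            v = 0.5 * u + rng.normal(size=n)      # first-stage error -> endogeneity
            x = z @ pi + v
            y = u                                 # true coefficient on x is zero
            x_hat = z @ np.linalg.lstsq(z, x, rcond=None)[0]   # first-stage fit
            beta = (x_hat @ y) / (x_hat @ x_hat)               # 2SLS estimate
            resid = y - beta * x
            se = np.sqrt((resid @ resid) / (n - 1) / (x_hat @ x_hat))
            rejections += abs(beta / se) > 1.96
        return rejections / reps

    print(tsls_reject_rate())   # well above the nominal 0.05 with many weak IVs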

10.
In this article, we discuss parameter estimation for a k-factor generalized long-memory process with conditionally heteroskedastic noise. Two estimation methods are proposed: the first is based on the conditional distribution of the process, and the second is obtained as an extension of Whittle's estimation approach. For comparison purposes, Monte Carlo simulations are used to evaluate the finite-sample performance of these estimation techniques under four different conditional distribution functions.

11.
In this paper, a nonlinear model with response variables missing at random is studied. In order to improve the coverage accuracy for the model parameters, the empirical likelihood (EL) ratio method is considered. On the complete data, the EL statistic for the parameters and its approximation have a χ2 asymptotic distribution. When the responses are reconstituted using a semi-parametric method, the empirical log-likelihood associated with the imputed response variables is also asymptotically χ2. The Wilks theorem for EL on the parameters, based on the reconstituted data, is also satisfied. These results can be used to construct confidence regions for the model parameters and the response variables. It is shown via Monte Carlo simulations that the EL methods outperform the normal-approximation-based method in terms of coverage probability for the unknown parameters, including on the reconstituted data. The advantages of the proposed method are illustrated on real data.

12.
Abstract

Many researchers have used auxiliary information together with the survey variable to improve the efficiency of estimators of population parameters such as the mean, variance, total, and proportion. Ratio and regression estimation are the most commonly used methods that utilize auxiliary information in different ways to obtain maximum benefit in the form of high precision of the estimators. Thompson first introduced the concept of adaptive cluster sampling, an appropriate technique for collecting samples from rare and clustered populations. In this article, a generalized exponential-type estimator is proposed, and its properties are studied, for the estimation of the variance of rare and highly clustered populations using single auxiliary information. A numerical study is carried out on real and artificial populations to judge the performance of the proposed estimator against competing estimators. It is shown that the proposed generalized exponential-type estimator is more efficient than the adaptive and nonadaptive estimators under a conventional sampling design.

13.
Marginal imputation, which consists of imputing items separately, generally leads to biased estimators of bivariate parameters such as finite population coefficients of correlation. To overcome this problem, two main approaches have been considered in the literature. The first consists of using customary imputation methods such as random hot-deck imputation and adjusting for the bias at the estimation stage; this approach was studied in Skinner & Rao (2002). In this paper, we extend the results of Skinner & Rao (2002) to the case of arbitrary sampling designs and three variants of random hot-deck imputation. The second approach consists of using an imputation method which preserves the relationship between variables. Shao & Wang (2002) proposed a joint random regression imputation procedure that succeeds in preserving the relationship between two study variables. One drawback of the Shao–Wang procedure is that it suffers from additional variability (called the imputation variance) due to the random selection of residuals, resulting in potentially inefficient estimators. Following Chauvet, Deville, & Haziza (2011), we propose a fully efficient version of the Shao–Wang procedure that preserves the relationship between two study variables while virtually eliminating the imputation variance. Results of a simulation study support our findings. An application using data from the Workplace and Employees Survey is also presented. The Canadian Journal of Statistics 40: 124–149; 2012.
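To see the bias that motivates the article, here is a minimal sketch: two correlated items are imputed independently by random hot deck, which breaks the pairing between donors and attenuates the estimated correlation. The population, missingness rates, and correlation are purely illustrative.

    import numpy as np

    rng = np.random.default_rng(7)
    n = 5000
    x = rng.normal(size=n)
    y = 0.8 * x + 0.6 * rng.normal(size=n)      # corr(x, y) = 0.8 by construction

    miss_x = rng.uniform(size=n) < 0.3          # items go missing independently
    miss_y = rng.uniform(size=n) < 0.3

    # Marginal random hot deck: each missing item receives a donor value
    # drawn from that item's respondents, ignoring the other item entirely.
    x_imp, y_imp = x.copy(), y.copy()
    x_imp[miss_x] = rng.choice(x[~miss_x], size=miss_x.sum())
    y_imp[miss_y] = rng.choice(y[~miss_y], size=miss_y.sum())

    print(np.corrcoef(x, y)[0, 1])              # about 0.80 on complete data
    print(np.corrcoef(x_imp, y_imp)[0, 1])      # visibly attenuated after imputation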

14.
Abstract

Mutual information is a measure for investigating the dependence between two random variables. Copula-based estimation of mutual information reduces the complexity of the problem because mutual information depends only on the copula density. We propose two estimators and discuss their asymptotic properties. A simulation study is carried out to compare the performance of the estimators, and the methods are illustrated using real data sets.
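As one concrete illustration of the copula route (not necessarily the article's estimators): under a Gaussian copula, the mutual information is available in closed form as −½ log(1 − ρ²), where ρ is the correlation of the normal scores of the two margins. The data below are hypothetical.

    import numpy as np
    from scipy import stats

    def gaussian_copula_mi(x, y):
        """MI in nats, assuming the dependence follows a Gaussian copula.

        Ranks map each margin to (0, 1); the normal quantile function then
        gives normal scores whose correlation determines the copula.
        """
        n = len(x)
        zx = stats.norm.ppf(stats.rankdata(x) / (n + 1))
        zy = stats.norm.ppf(stats.rankdata(y) / (n + 1))
        rho = np.corrcoef(zx, zy)[0, 1]
        return -0.5 * np.log(1.0 - rho ** 2)

    rng = np.random.default_rng(8)
    x = rng.normal(size=2000)
    y = 0.6 * x + 0.8 * rng.normal(size=2000)   # true rho = 0.6
    print(gaussian_copula_mi(x, y))             # near -0.5*log(1 - 0.36) = 0.223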

15.
Abstract

In economics and government statistics, aggregated data rather than individual-level data are usually reported, for data confidentiality and for simplicity. In this paper we develop a method of flexibly estimating the probability density function of the population from aggregated data obtained as group averages, when individual-level data are grouped according to quantile limits. The kernel density estimator has commonly been applied to such data without taking the aggregation process into account and has been shown to perform poorly. Our method models the quantile function as an integral of the exponential of a spline function and deduces the density function from the quantile function. We match the aggregated data to their theoretical counterparts using least squares and regularize the estimation by using the squared second derivative of the density function as the penalty. A computational algorithm is developed to implement the method. Applications to simulated data and US household income survey data show that our penalized spline estimator can accurately recover the density function of the underlying population, whereas the common kernel density estimator is severely biased. The method is applied to study the dynamics of China's urban income distribution using published interval-aggregated data from 1985–2010.

16.
Numerous variable selection methods rely on a two-stage procedure, where a sparsity-inducing penalty is used in the first stage to predict the support, which is then conveyed to the second stage for estimation or inference purposes. In this framework, the first stage screens variables to find a set of possibly relevant variables, and the second stage operates on this set of candidate variables to improve estimation accuracy or to assess the uncertainty associated with the selection of variables. We advocate that more information can be conveyed from the first stage to the second: we use the magnitudes of the coefficients estimated in the first stage to define an adaptive penalty that is applied at the second stage. We give the example of an inference procedure that greatly benefits from the proposed transfer of information. The procedure is precisely analyzed in a simple setting, and our large-scale experiments empirically demonstrate that actual benefits can be expected in much more general situations, with sensitivity gains ranging from 50 to 100% compared with the state of the art.
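A minimal sketch of one way to transfer first-stage magnitudes into a second-stage penalty, in the spirit of the adaptive lasso (not necessarily the authors' exact procedure): per-variable weights 1/|β̂| from a first lasso fit are applied in a second fit by rescaling the columns of X. The data, penalty levels, and weighting rule are hypothetical choices.

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(9)
    n, p = 200, 50
    X = rng.normal(size=(n, p))
    beta = np.zeros(p)
    beta[:5] = [3.0, -2.0, 1.5, 1.0, -1.0]        # sparse ground truth
    y = X @ beta + rng.normal(size=n)

    # Stage 1: plain lasso provides coefficient magnitudes.
    stage1 = Lasso(alpha=0.1).fit(X, y)
    w = 1.0 / (np.abs(stage1.coef_) + 1e-8)       # large |coef| -> small penalty

    # Stage 2: weighted lasso via column rescaling, then map back.
    stage2 = Lasso(alpha=0.1).fit(X / w, y)
    coef = stage2.coef_ / w
    print(np.nonzero(coef)[0])                    # indices of selected variables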

17.
Abstract

This paper compares three estimators for periodic autoregressive (PAR) models. The first is the classical periodic Yule-Walker estimator (YWE). The second is a robust version of the YWE (RYWE), which uses a robust autocovariance function in the periodic Yule-Walker equations, and the third is a robust least squares estimator (RLSE) based on iterative least squares applied to robust versions of the original time series. Daily mean particulate matter concentration (PM10) data are used to illustrate the methodologies in a real application in the air quality area.

18.
Abstract

In this paper, we analyze the SUR Tobit model for three left-censored dependent variables, modeling its nonlinear dependence structure through the one-parameter Clayton copula. For unbiased parameter estimation, we propose an extension of the Inference Function for Augmented Margins (IFAM) method to the trivariate case. Interval estimation for the model parameters using resampling procedures is also discussed. We perform simulation and empirical studies, whose satisfactory results indicate the good performance of the proposed model and methods. Our procedure is illustrated using real data on the consumption of food items (salad dressings, lettuce, tomato) by Americans.

19.
Given a number of record values from independent and identically distributed random variables with a continuous distribution function F, our aim is to predict future record values under suitable assumptions on the tail of F. In this paper, we are primarily concerned with finding reasonable tolerance regions for future record values. Two methods are proposed. The first deals with the case where only record values are observed; the second makes use of the information provided by the complete sample.

20.
This article proposes a new mixed variable lot-size multiple dependent state sampling plan in which an attribute sampling plan is used in the first stage and a variables multiple dependent state sampling plan based on the process capability index is used in the second stage for the inspection of measurable quality characteristics. The proposed mixed plan is developed for both symmetric and asymmetric fraction nonconforming. The optimal plan parameters are determined by considering the satisfaction levels of the producer and the consumer simultaneously, at an acceptable quality level and a limiting quality level, respectively. The performance of the proposed plan relative to the mixed single sampling plan based on Cpk and the mixed variable lot-size plan based on Cpk, with respect to the average sample number, is also investigated. Tables are constructed for easy selection of plan parameters for both symmetric and asymmetric fraction nonconforming, and real-world examples are given to illustrate the practical implementation of the proposed mixed variable lot-size plan.
