1.
The present article develops imputation methods to reduce the impact of nonresponse on both occasions in two-occasion successive (rotation) sampling. Utilizing auxiliary information that is available only at the current occasion, estimators are proposed for the population mean at the current occasion. Estimators for the current occasion are also derived as a particular case when there is nonresponse on either the first occasion or the second occasion. The behavior of the proposed estimators is studied and their respective optimum replacement policies are discussed. To study the effectiveness of the suggested imputation methods, the performances of the proposed estimators are compared in two situations: with and without nonresponse. The results are demonstrated with the help of empirical studies.

2.
Summary.  We consider the general problem of simultaneously monitoring multiple series of counts, applied in this case to methicillin resistant Staphylococcus aureus (MRSA) reports in 173 UK National Health Service acute trusts. Both within-trust changes from baseline ('local monitors') and overall divergence from the bulk of trusts ('relative monitors') are considered. After standardizing for type of trust and overall trend, a transformation to approximate normality is adopted and empirical Bayes shrinkage methods are used for estimating an appropriate baseline for each trust. Shewhart, exponentially weighted moving average and cumulative sum charts are then set up for both local and relative monitors: the current state of each is summarized by a p-value, which is processed by a signalling procedure that controls the false discovery rate. The performance of these methods is illustrated by using 4.5 years of MRSA data, and the appropriate use of such methods in practice is discussed.
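The signalling step described above — summarizing each monitor by a p-value and flagging while controlling the false discovery rate — can be sketched with the Benjamini-Hochberg procedure. This is an illustrative stand-in; the paper's exact signalling procedure may differ in detail.

```python
# Sketch: flag monitors whose p-values clear the Benjamini-Hochberg
# step-up threshold, controlling the false discovery rate at level `fdr`.

def bh_signals(p_values, fdr=0.05):
    """Return sorted indices of monitors flagged at FDR level `fdr`."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k = 0  # largest rank whose p-value clears the BH threshold
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= fdr * rank / m:
            k = rank
    return sorted(order[:k])

# Five hypothetical monitors; the two smallest p-values are flagged.
flags = bh_signals([0.001, 0.009, 0.04, 0.20, 0.90], fdr=0.05)
```

Note the step-up character of the rule: a p-value above its own threshold can still be flagged if a larger rank clears its threshold.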

3.
Drug developers are required to demonstrate substantial evidence of effectiveness through the conduct of adequate and well‐controlled (A&WC) studies to obtain marketing approval of their medicine. What constitutes A&WC is interpreted as the conduct of randomized controlled trials (RCTs). However, these trials are sometimes unfeasible because of their size, duration, and cost. One way to reduce sample size is to leverage information on the control through a prior. One consideration when forming a data‐driven prior is the consistency of the external and the current data. It is essential to make this process less susceptible to choosing information that only helps improve the chances of making an effectiveness claim. For this purpose, propensity score methods are employed for two reasons: (1) the propensity score gives the probability of a patient being in the trial, and (2) it minimizes selection bias by pairing treatment and control subjects within the trial with control subjects in the external data that are similar in terms of their pretreatment characteristics. Two matching schemes based on propensity scores, estimated through generalized boosted methods, are applied to a real example with the objective of using external data to perform Bayesian augmented control in a trial where the allocation is disproportionate. The simulation results show that the data augmentation process prevents prior and data conflict and improves the precision of the estimator of the average treatment effect.
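The pairing idea can be sketched as greedy nearest-neighbor matching on the propensity score. The abstract estimates scores with generalized boosted models; here the scores are assumed already estimated, and the caliper value is an illustrative choice, not taken from the paper.

```python
# Hypothetical sketch: pair each in-trial control with the nearest unused
# external control on the propensity score, subject to a caliper.

def nearest_neighbor_match(trial_scores, external_scores, caliper=0.1):
    """Greedy 1:1 matching within a caliper.
    Returns a list of (trial_index, external_index) pairs."""
    available = set(range(len(external_scores)))
    pairs = []
    for i, s in enumerate(trial_scores):
        best, best_d = None, caliper
        for j in available:
            d = abs(s - external_scores[j])
            if d <= best_d:
                best, best_d = j, d
        if best is not None:
            pairs.append((i, best))
            available.discard(best)  # each external control used at most once
    return pairs

# Two trial controls matched against three external candidates.
pairs = nearest_neighbor_match([0.30, 0.55], [0.52, 0.31, 0.90])
```

Unmatched external subjects are simply not borrowed, which is one way the scheme guards against prior-data conflict.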

4.
Three nonparametric methods for estimating a change-point, and the mean function before and after the change has occurred, are developed for a restricted class of processes. The estimators which are developed are intuitive, and their asymptotic behavior is studied. Monte Carlo comparisons are undertaken for small and moderate samples.

5.
This article is concerned with the effect of methods for handling missing values in multivariate control charts. We discuss the complete case, mean substitution, regression, stochastic regression, and the expectation–maximization algorithm methods for handling missing values. Estimates of the mean vector and variance–covariance matrix from the treated data set are used to build the multivariate exponentially weighted moving average (MEWMA) control chart. Based on a Monte Carlo simulation study, the performance of each of the five methods is investigated in terms of its ability to obtain the nominal in-control and out-of-control average run length (ARL). We consider three sample sizes, five levels of the percentage of missing values, and three numbers of variables. Our simulation results show that imputation methods produce better performance than case deletion methods. The regression-based imputation methods have the best overall performance among all the competing methods.
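The pipeline above — treat the missing values, then chart the completed data — can be sketched minimally with mean substitution feeding a MEWMA statistic. This is not the paper's study design: the in-control mean is taken as 0 and the covariance as known, and all parameter values are illustrative.

```python
import numpy as np

def mean_substitute(X):
    """Replace NaNs in each column with that column's observed mean."""
    X = X.copy()
    col_means = np.nanmean(X, axis=0)
    nan_rows, nan_cols = np.where(np.isnan(X))
    X[nan_rows, nan_cols] = col_means[nan_cols]
    return X

def mewma_stats(X, sigma, lam=0.2):
    """Return the MEWMA T^2 statistic for each observation vector in X,
    using the asymptotic covariance lam/(2-lam) * sigma of the EWMA vector."""
    p = X.shape[1]
    z = np.zeros(p)
    sigma_z_inv = np.linalg.inv(lam / (2 - lam) * sigma)
    out = []
    for x in X:
        z = lam * x + (1 - lam) * z
        out.append(float(z @ sigma_z_inv @ z))
    return out

# Bivariate data with two missing entries, known identity covariance.
X = np.array([[0.1, np.nan], [-0.2, 0.4], [np.nan, 0.0]])
stats = mewma_stats(mean_substitute(X), sigma=np.eye(2))
```

A signal would be raised when a T² value exceeds a control limit chosen for the desired in-control ARL.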

6.
This article investigates three procedures for multiple comparisons with a control in the presence of unequal error variances. The advantages of the proposed methods are illustrated through two examples. The performance of the proposed methods and other alternative methods is compared by simulation studies. The results show that the typical methods assuming equal variance have an inflated error rate and may lead to erroneous inference when the equal variance assumption fails. In addition, the simulation study shows that the proposed approaches always control the family-wise error rate at a specified nominal level α, while some established methods are liberal and have an inflated error rate in some scenarios.

7.
In the past, most comparison-with-control problems have dealt with comparing k test treatments to either positive or negative controls. Dasgupta et al. [2006. Using numerical methods to find the least favorable configuration when comparing k test treatments to both positive and negative controls. Journal of Statistical Computation and Simulation 76, 251–265] enumerate situations where it is imperative to compare several test treatments to both a negative and a positive control simultaneously. Specifically, the aim is to see whether the test treatments are worse than the negative control, or better than the positive control, when the two controls are sufficiently far apart. To find critical regions for this problem, one needs to find the least favorable configuration (LFC) under the composite null. Dasgupta et al. [2006] devised a numerical technique to find the LFC; in this paper we verify their result analytically. Via Monte Carlo simulation we compare the proposed method to the logical single-step alternatives: Dunnett's procedure [1955. A multiple comparison procedure for comparing several treatments with a control. Journal of the American Statistical Association 50, 1096–1121] or the Bonferroni correction. The proposed method is superior in terms of both the Type I error and the marginal power.

8.
Since the 1920s and until recently, numerical computation has been a limiting factor in the application of statistical methods to the improvement of product quality. This restriction is being eliminated by the introduction of computer tools for statistical quality control.

Both novice and expert users of statistical methods can benefit substantially from the availability of integrated software for computing, graphics, and data management. The use of such software in SQC training programs enables the student to focus on the understanding of statistical techniques, rather than their mechanical details. In production environments, properly designed interfaces facilitate data entry and access to statistical software by plant personnel, without requiring knowledge of a computer language. These same tools can be used by management to retrieve information and obtain summaries and displays of critical data gathered over different periods of time. Finally, computer tools provide the applied statistician with a greater range of advanced methods, including analytical and graphical extensions of the traditional Shewhart control chart.

9.
Statistical Classification Methods in Consumer Credit Scoring: a Review
Credit scoring is the term used to describe formal statistical methods used for classifying applicants for credit into 'good' and 'bad' risk classes. Such methods have become increasingly important with the dramatic growth in consumer credit in recent years. A wide range of statistical methods has been applied, though the literature available to the public is limited for reasons of commercial confidentiality. Particular problems arising in the credit scoring context are examined and the statistical methods which have been applied are reviewed.

10.
In recent years, statistical profile monitoring has emerged as a relatively new and potentially useful subarea of statistical process control and has attracted the attention of many researchers and practitioners. A profile, waveform, or signature is a function that relates a dependent or response variable to one or more independent variables. Different statistical methods have been proposed to monitor profiles, each requiring its own assumptions. One common and implicit assumption in most of the proposed procedures is that of independent residuals. Violation of this assumption can affect the performance of control procedures and ultimately lead to misleading results. In this article, we study phase II analysis of monitoring multivariate simple linear profiles when the independence assumption is violated. Three time-series-based methods are proposed to eliminate the effect of correlation that exists between multivariate profiles. Performances of the proposed methods are evaluated using the average run length (ARL) criterion. Numerical results indicate satisfactory performance for the proposed methods. A simulated example is also used to show the application of the proposed methods.

11.
Summary.  We review some prospective scan-based methods that are used in health-related applications to detect increased rates of mortality or morbidity and to detect bioterrorism or active clusters of disease. We relate these methods to the use of the moving average chart in industrial applications. Issues that are related to the performance evaluation of spatiotemporal scan-based methods are discussed. In particular we clarify the definition of a recurrence interval and demonstrate that this measure does not reflect some important aspects of the statistical performance of scan-based, and other, surveillance methods. Some research needs in this area are given.

12.
The process of serially dependent counts with deflation or inflation of zeros is commonly observed in many applications. This paper investigates the monitoring of such a process, the first-order zero-modified geometric integer-valued autoregressive process (ZMGINAR(1)). In particular, two control charts, the upper-sided and lower-sided CUSUM charts, are developed to detect shifts in the mean of the ZMGINAR(1) process. Both the average run length and the standard deviation of the run length of these two charts are investigated using Markov chain approaches. An extensive simulation is also conducted to assess the effectiveness of the charts, and the presented methods are applied to two sets of real data arising from a study on drug use.
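The upper-sided chart's mechanics can be illustrated with a generic count CUSUM: accumulate deviations of the counts above a reference value k and signal an upward mean shift when the statistic exceeds a decision limit h. This is a plain count CUSUM sketch, not the paper's ZMGINAR(1)-specific design, and k and h below are illustrative.

```python
# Upper-sided CUSUM for a count series: C_t = max(0, C_{t-1} + X_t - k),
# signalling the first time C_t exceeds the decision limit h.

def upper_cusum(counts, k, h):
    """Return (list of CUSUM statistics, index of first signal or None)."""
    c, stats, signal = 0.0, [], None
    for t, x in enumerate(counts):
        c = max(0.0, c + x - k)
        stats.append(c)
        if signal is None and c > h:
            signal = t
    return stats, signal

# A count series whose mean drifts upward; the chart signals at t = 4.
stats, signal = upper_cusum([1, 0, 2, 5, 6, 7], k=2.0, h=4.0)
```

The lower-sided chart is symmetric, accumulating max(0, C + k' - X) to catch downward shifts (including zero inflation pulling the mean down).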

13.
This article discusses methodology for constructing control charts to monitor the percentiles of a Weibull process with known shape parameter. Periodic samples are censored at the smallest observed value. Charts with alarm and warning limits are studied, and these limits are derived using theoretical results based on the first-order statistic. The performance of the proposed charts is evaluated and compared using average run lengths. A numerical application concerning life tests of an electronic product is presented to illustrate the methods.

14.
Statistical disclosure control (SDC) is a balancing act between mandatory data protection and the understandable demand from researchers for access to original data. In this paper, a family of methods is defined to 'mask' sensitive variables before data files can be released. In the first step, the variable to be masked is 'cloned' (C). Then, the duplicated variable as a whole, or just a part of it, is 'suppressed' (S). The masking procedure's third step 'imputes' (I) data for these artificial missings. The original variable can then be deleted, and its masked substitute serves as the basis for the analysis of the data. The idea of this general 'CSI framework' is to open the wide field of imputation methods for SDC. The method applied in the I-step can make use of available auxiliary variables, including the original variable. Different members of this family of methods that deliver variance estimators are discussed in some detail. Furthermore, a simulation study analyzes various methods belonging to the family with respect to both the quality of parameter estimation and privacy protection. Based on the results obtained, recommendations are formulated for different estimation tasks.
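The three C-S-I steps can be sketched on a toy variable. The suppression and imputation rules below are assumed stand-ins (random suppression, crude donor-based imputation); a real application would pick specific members of the family, e.g. a model-based I-step using auxiliary variables.

```python
import random

def csi_mask(values, suppress_rate=0.5, seed=1):
    """Toy CSI masking: Clone the variable, Suppress a random subset,
    then Impute the suppressed entries from the surviving donors."""
    rng = random.Random(seed)
    clone = list(values)                           # C: clone the variable
    missing = [i for i in range(len(clone)) if rng.random() < suppress_rate]
    for i in missing:
        clone[i] = None                            # S: suppress a subset
    donors = [v for v in clone if v is not None]
    for i in missing:
        clone[i] = rng.choice(donors)              # I: impute from donors
    return clone

masked = csi_mask([10, 12, 11, 13, 14, 10])
```

The released variable has the same length and marginal flavor as the original, but suppressed cells no longer reveal their true values.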

15.
When recruitment into a clinical trial is limited due to rarity of the disease of interest, or when recruitment to the control arm is limited for ethical reasons (eg, pediatric studies or important unmet medical need), exploiting historical controls to augment the prospectively collected database can be an attractive option. Statistical methods for combining historical data with randomized data, while accounting for the incompatibility between the two, have been recently proposed and remain an active field of research. The current literature lacks both a rigorous comparison between the methods and guidelines about their use in practice. In this paper, we compare the existing methods based on a confirmatory phase III study design exercise done for a new antibacterial therapy with a binary endpoint and a single historical dataset. A procedure to assess the relative performance of the different methods for borrowing information from historical control data is proposed, and practical questions related to the selection and implementation of methods are discussed. Based on our examination, we found that the methods have a comparable performance, but we recommend the robust mixture prior for its ease of implementation.

16.
Weighted methods are an important feature of multiplicity control methods. The weights must usually be chosen a priori, on the basis of experimental hypotheses. Under some conditions, however, they can be chosen making use of information from the data (therefore a posteriori) while maintaining multiplicity control. In this paper we provide: (1) a review of weighted methods for familywise type I error rate (FWE) control (both parametric and nonparametric) and false discovery rate (FDR) control; (2) a review of data-driven weighted methods for FWE control; (3) a new proposal for weighted FDR control with data-driven weights under independence among variables; (4) a corresponding proposal under any type of dependence; (5) a simulation study that assesses the performance of the procedure of point (4) under various conditions.

17.
A conformance proportion is an important and useful index to assess industrial quality improvement. Statistical confidence limits for a conformance proportion are usually required not only to perform statistical significance tests, but also to provide useful information for determining practical significance. In this article, we propose approaches for constructing statistical confidence limits for a conformance proportion of multiple quality characteristics. Under the assumption that the variables of interest are distributed with a multivariate normal distribution, we develop an approach based on the concept of a fiducial generalized pivotal quantity (FGPQ). Without any distribution assumption on the variables, we apply some confidence interval construction methods for the conformance proportion by treating it as the probability of a success in a binomial distribution. The performance of the proposed methods is evaluated through detailed simulation studies. The results reveal that the simulated coverage probability (cp) for the FGPQ-based method is generally larger than the claimed value. On the other hand, one of the binomial distribution-based methods, that is, the standard method suggested in classical textbooks, appears to have smaller simulated cps than the nominal level. Two alternatives to the standard method are found to maintain their simulated cps sufficiently close to the claimed level, and hence their performances are judged to be satisfactory. In addition, three examples are given to illustrate the application of the proposed methods.
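The binomial-based construction can be sketched by contrasting the standard (Wald) textbook interval with the Wilson score interval, one common alternative with better coverage. Whether Wilson is among the two alternatives the article studies is an assumption here; it simply illustrates the kind of adjustment involved. z = 1.96 gives roughly 95% confidence.

```python
from math import sqrt

def wald_interval(successes, n, z=1.96):
    """Standard textbook interval: p-hat +/- z * sqrt(p(1-p)/n)."""
    p = successes / n
    half = z * sqrt(p * (1 - p) / n)
    return (p - half, p + half)

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval: recenters and shrinks toward 1/2,
    which keeps coverage close to nominal for p near 0 or 1."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (center - half, center + half)

# 96 of 100 units conforming: a proportion near 1, where Wald misbehaves.
lo, hi = wilson_interval(96, 100)
```

For conformance proportions near 1 (the usual case in quality work), the Wald interval can spill past 1 or undercover, which mirrors the abstract's finding that the standard method's simulated coverage falls below the nominal level.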

18.
Srivastava and Wu, and Box and Kramer, considered an integrated moving average process of order one with a sampling interval for process adjustment. However, their results were obtained by asymptotic methods and simulations, respectively. In this paper, these results are obtained analytically. It is assumed that there is a sampling cost and an adjustment cost. The cost of deviating from the target value is assumed to be proportional to the square of the deviations. The long-run average cost is evaluated exactly in terms of moments of the randomly stopped random walk. Two approximations are given and shown by simulation to be close to the exact value. One of these approximations is used to obtain an explicit expression for the optimum value of the inspection interval and the control limit at which an adjustment is to be made.

19.
Using Markov chain representations, we evaluate and compare the performance of cumulative sum (CUSUM) and Shiryayev–Roberts methods in terms of the zero- and steady-state average run length and worst-case signal resistance measures. We also calculate the signal resistance values from the worst- to the best-case scenarios for both methods. Our results support the recommendation that Shewhart limits be used with CUSUM and Shiryayev–Roberts methods, especially for low values of the size of the shift in the process mean that the methods are designed to detect optimally.
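The abstract's ARLs come from Markov chain representations; as a quick cross-check, a zero-state ARL can also be estimated by direct simulation. The sketch below does this for a standard normal-theory CUSUM with illustrative parameters (k = 0.5, h = 4), not values taken from the paper.

```python
import random

def simulate_cusum_arl(mu=0.0, k=0.5, h=4.0, reps=2000, max_n=100000, seed=7):
    """Monte Carlo estimate of the zero-state ARL of an upper CUSUM
    C_t = max(0, C_{t-1} + X_t - k) with X_t ~ N(mu, 1), limit h."""
    rng = random.Random(seed)
    total = 0
    for _ in range(reps):
        c, n = 0.0, 0
        while c <= h and n < max_n:
            c = max(0.0, c + rng.gauss(mu, 1.0) - k)
            n += 1
        total += n  # run length of this replicate
    return total / reps

in_control = simulate_cusum_arl(mu=0.0)   # long runs expected
shifted = simulate_cusum_arl(mu=1.0)      # quick detection expected
```

A Markov chain approximation discretizes the statistic's state space into intervals and reads the ARL off the fundamental matrix; simulation trades that algebra for computing time and Monte Carlo error.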

20.
In some industrial applications, the quality of a process or product is characterized by a relationship between the response variable and one or more independent variables, which is called a profile. There are many approaches for monitoring different types of profiles in the literature. Most researchers assume that the response variable follows a normal distribution. However, this assumption may be violated in many cases, most commonly when the response variable follows a distribution from the family of generalized linear models (GLMs). For example, when the response variable is the number of defects in a certain area of a product, the observations follow a Poisson distribution, and ignoring this fact will cause misleading results. In this paper, three methods, a T2-based method, a likelihood ratio test (LRT) method, and an F method, are developed and modified for monitoring GLM regression profiles in Phase I. The performance of the proposed methods is analysed and compared for the special case in which the response variable follows a Poisson distribution. A simulation study is conducted with respect to the probability-of-signal criterion. Results show that the LRT method performs better than the other two methods, and the F method performs better than the T2-based method, in detecting both small and large step shifts as well as drifts. Moreover, the F method performs better than the other two methods, and the LRT method performs poorly in comparison with the F and T2-based methods, in detecting outliers. A real case, in which the size and number of agglomerates ejected from a volcano on successive days form the GLM profile, is illustrated, and the proposed methods are applied to determine whether the number of agglomerates of each size is under statistical control. Results showed that the proposed methods could handle the mentioned situation and distinguish the out-of-control conditions.
