Similar Documents
A total of 20 similar documents were found.
1.
Parameter design or robust parameter design (RPD) is an engineering methodology intended as a cost-effective approach for improving the quality of products and processes. The goal of parameter design is to choose the levels of the control variables that optimize a defined quality characteristic. An essential component of RPD involves the assumption of well-estimated models for the process mean and variance. Traditionally, the modeling of the mean and variance has been done parametrically. It is often the case, particularly when modeling the variance, that nonparametric techniques are more appropriate due to the nature of the curvature in the underlying function. Most response surface experiments involve sparse data. In sparse data situations with unusual curvature in the underlying function, nonparametric techniques often result in estimates with problematic variation, whereas their parametric counterparts may result in estimates with problematic bias. We propose the use of semi-parametric modeling within the robust design setting, combining parametric and nonparametric functions to improve the quality of both mean and variance model estimation. The proposed method is illustrated with an example and simulations.
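As a rough illustration of the semi-parametric idea (a minimal sketch, not the authors' estimator), the following snippet fits a parametric second-order model to the per-setting sample means and a nonparametric Nadaraya–Watson kernel smoother to the log sample variances, then combines the two fitted surfaces in a squared-error loss to pick a control setting; the simulated data, bandwidth, and target value are hypothetical.

```python
import numpy as np

# Hypothetical replicated experiment: at each control setting x we observe
# several responses, giving a sample mean and a sample variance per setting.
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 10)                      # control-factor settings
reps = 6
true_mean = 10.0 + 2.0 * x - 1.5 * x**2
true_sd = 0.5 + 0.8 * np.exp(-4.0 * x**2)           # non-quadratic variance curve
y = true_mean[:, None] + true_sd[:, None] * rng.standard_normal((x.size, reps))

ybar = y.mean(axis=1)                               # per-setting sample means
s2 = y.var(axis=1, ddof=1)                          # per-setting sample variances

# Parametric part: second-order (quadratic) model for the process mean.
X = np.column_stack([np.ones_like(x), x, x**2])
beta = np.linalg.lstsq(X, ybar, rcond=None)[0]

def mean_model(x0):
    return beta[0] + beta[1] * x0 + beta[2] * x0**2

# Nonparametric part: Nadaraya-Watson kernel smoother of log s^2, capturing
# curvature that a low-order polynomial would miss.
def var_model(x0, bandwidth=0.3):
    w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)
    return np.exp(np.sum(w * np.log(s2)) / np.sum(w))

# Robust-parameter-design step: pick the control setting minimizing an
# estimated squared-error loss around a hypothetical target value.
target = 10.5
grid = np.linspace(-1, 1, 201)
loss = (mean_model(grid) - target) ** 2 + np.array([var_model(g) for g in grid])
print("recommended control setting:", grid[np.argmin(loss)])
```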

2.
Outliers that commonly occur in business sample surveys can have large impacts on domain estimates. The authors consider an outlier-robust design and smooth estimation approach, which can be related to the so-called "Surprise stratum" technique [Kish, "Survey Sampling," Wiley, New York (1965)]. The sampling design utilizes a threshold sample consisting of previously observed outliers that are selected with probability one, together with stratified simple random sampling from the rest of the population. The domain predictor is an extension of the Winsorization-based estimator proposed by Rivest and Hidiroglou [Rivest and Hidiroglou, "Outlier Treatment for Disaggregated Estimates," in "Proceedings of the Section on Survey Research Methods," American Statistical Association (2004), pp. 4248–4256], and is similar to the estimator for skewed populations suggested by Fuller [Fuller, Statistica Sinica 1991;1:137–158]. It makes use of a domain Winsorized sample mean plus a domain-specific adjustment based on the estimated overall mean of the excess values. The methods are studied in theory from a design-based perspective and by simulations based on the Norwegian Research and Development Survey data. Guidelines for choosing the threshold values are provided. The Canadian Journal of Statistics 39: 147–164; 2011 © 2010 Statistical Society of Canada
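A simplified sketch of the Winsorization idea (not the exact Rivest–Hidiroglou/Fuller estimator) is given below: values above a threshold are Winsorized, and each domain mean is the Winsorized domain mean plus an adjustment borrowing the estimated overall mean of the excess values. The data, weights, and threshold choice are hypothetical.

```python
import numpy as np

def winsorized_domain_means(y, domain, weights, threshold):
    """Illustrative Winsorization-based domain estimator (a simplified sketch,
    not the exact formulation referenced in the abstract)."""
    y = np.asarray(y, float)
    w = np.asarray(weights, float)
    domain = np.asarray(domain)
    excess = np.maximum(y - threshold, 0.0)        # part of each value above K
    y_wins = np.minimum(y, threshold)              # Winsorized values

    # Estimated overall mean of the excess values, shared across domains.
    overall_excess_mean = np.sum(w * excess) / np.sum(w)

    out = {}
    for d in np.unique(domain):
        idx = domain == d
        wd = w[idx]
        dom_wins_mean = np.sum(wd * y_wins[idx]) / np.sum(wd)
        # Domain mean = Winsorized domain mean + adjustment borrowing the
        # overall excess, so single outliers do not dominate small domains.
        out[d] = dom_wins_mean + overall_excess_mean
    return out

# Example with a skewed, business-survey-like population.
rng = np.random.default_rng(1)
y = rng.lognormal(mean=3.0, sigma=1.0, size=200)
domain = rng.choice(["A", "B", "C"], size=200)
weights = np.full(200, 5.0)
print(winsorized_domain_means(y, domain, weights, threshold=np.quantile(y, 0.95)))
```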

3.
4.
This paper proposes a new approach, based on recent developments in wavelet theory, to model the dynamics of the exchange rate. First, we consider the maximum overlap discrete wavelet transform (MODWT) to decompose the exchange rate levels into several scales. Second, we focus on modelling the conditional mean of the detrended series as well as their volatilities. In particular, we consider the generalized fractional, one-factor, Gegenbauer process (GARMA) to model the conditional mean and the fractionally integrated generalized autoregressive conditional heteroskedasticity process (FIGARCH) to model the conditional variance. Moreover, we estimate the GARMA-FIGARCH model using the wavelet-based maximum likelihood estimator (Whitcher in Technometrics 46:225–238, 2004). To illustrate the usefulness of our methodology, we carry out an empirical application using the daily Tunisian exchange rates relative to the American Dollar, the Euro and the Japanese Yen. The empirical results show the relevance of the selected modelling approach, which contributes to a better forecasting performance of the exchange rate series.
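The first step of this approach, the MODWT decomposition, can be sketched with a Haar filter and circular filtering as below; fitting the GARMA and FIGARCH components to the resulting detail series is not reproduced here, and the simulated series and number of levels are only illustrative.

```python
import numpy as np

def haar_modwt(x, levels):
    """Maximal-overlap DWT with the Haar filter (a minimal sketch; packages such
    as PyWavelets offer more general wavelet filters)."""
    x = np.asarray(x, float)
    n = x.size
    g = np.array([0.5, 0.5])      # MODWT scaling (low-pass) filter
    h = np.array([0.5, -0.5])     # MODWT wavelet (high-pass) filter
    v = x.copy()
    details = []
    for j in range(1, levels + 1):
        shift = 2 ** (j - 1)
        idx = (np.arange(n)[:, None] - shift * np.arange(2)[None, :]) % n
        w_j = (v[idx] * h).sum(axis=1)   # detail (wavelet) coefficients at level j
        v = (v[idx] * g).sum(axis=1)     # smooth (scaling) coefficients at level j
        details.append(w_j)
    return details, v                    # per-scale details + final smooth trend

# Decompose a simulated exchange-rate-like series; the smooth part plays the
# role of the trend, and the detail series would then feed the conditional
# mean/variance models (GARMA, FIGARCH) described above.
rng = np.random.default_rng(2)
rate = np.cumsum(0.001 * rng.standard_normal(512)) + 1.3
details, smooth = haar_modwt(rate, levels=4)
print([round(d.std(), 5) for d in details], smooth[:3])
```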

5.
We report on an empirical investigation of the modified rescaled adjusted range or R/S statistic that was proposed by Lo (1991, Econometrica 59, 1279–1313) as a test for long-range dependence with good robustness properties under 'extra' short-range dependence. In contrast to the classical R/S statistic that uses the standard deviation S to normalize the rescaled range R, Lo's modified R/S statistic Vq is normalized by a modified standard deviation Sq which takes into account the covariances of the first q lags, so as to discount the influence of the short-range dependence structure that might be present in the data. Depending on the value of the resulting test statistic Vq, the null hypothesis of no long-range dependence is either rejected or accepted. By performing Monte Carlo simulations with 'truly' long-range and short-range dependent time series, we study the behavior of Vq, as a function of q, and uncover a number of serious drawbacks to using Lo's method in practice. For example, we show that as the truncation lag q increases, the test statistic Vq has a strong bias toward accepting the null hypothesis (i.e., no long-range dependence), even in ideal situations of 'purely' long-range dependent data.
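Lo's V_q statistic can be computed directly from its definition; the sketch below (with a simulated AR(1) series and arbitrarily chosen truncation lags) illustrates how the value drifts with q, which is the sensitivity discussed above.

```python
import numpy as np

def lo_modified_rs(x, q):
    """Lo's (1991) modified rescaled-range statistic V_q (a direct sketch of the
    formula; critical values must still be taken from Lo's tables)."""
    x = np.asarray(x, float)
    n = x.size
    d = x - x.mean()
    partial = np.cumsum(d)
    r = partial.max() - partial.min()             # rescaled adjusted range R

    # Modified variance: usual variance plus Bartlett-weighted autocovariances
    # of the first q lags, discounting short-range dependence (q = 0 gives the
    # classical R/S statistic).
    s2 = np.mean(d ** 2)
    for j in range(1, q + 1):
        w = 1.0 - j / (q + 1.0)                   # Bartlett weight
        gamma_j = np.sum(d[j:] * d[:-j]) / n      # lag-j autocovariance
        s2 += 2.0 * w * gamma_j
    return r / (np.sqrt(n) * np.sqrt(s2))

# Behaviour of V_q as a function of the truncation lag q for a short-range
# dependent AR(1) series.
rng = np.random.default_rng(3)
x = np.zeros(2000)
for t in range(1, x.size):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()
print({q: round(lo_modified_rs(x, q), 3) for q in (0, 5, 25, 100)})
```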

6.
Estimation of the Pareto tail index from extreme order statistics is an important problem in many settings. The upper tail of the distribution, where data are sparse, is typically fitted with a model, such as the Pareto model, from which quantities such as probabilities associated with extreme events are deduced. The success of this procedure relies heavily not only on the choice of the estimator for the Pareto tail index but also on the procedure used to determine the number k of extreme order statistics that are used for the estimation. The authors develop a robust prediction error criterion for choosing k and estimating the Pareto index. A Monte Carlo study shows the good performance of the new estimator and the analysis of real data sets illustrates that a robust procedure for selection, and not just for estimation, is needed.
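To see why the choice of k matters, the classical (non-robust) Hill estimator can be evaluated across several values of k, as in the sketch below; the robust prediction error criterion developed by the authors is not reproduced, and the contaminated Pareto sample is purely illustrative.

```python
import numpy as np

def hill_estimator(x, k):
    """Classical Hill estimator of the Pareto tail index based on the k largest
    order statistics (shown only to illustrate the sensitivity to k)."""
    xs = np.sort(np.asarray(x, float))[::-1]      # descending order statistics
    logs = np.log(xs[:k]) - np.log(xs[k])         # log-spacings above the (k+1)-th largest
    gamma_hat = logs.mean()                       # estimate of 1/alpha
    return 1.0 / gamma_hat                        # tail index alpha

# Pareto(alpha = 2) sample contaminated with a few gross outliers: the estimate
# varies strongly with k, which is why a principled choice of k is needed.
rng = np.random.default_rng(4)
x = rng.pareto(2.0, size=1000) + 1.0
x[:5] *= 50.0                                     # gross outliers
print({k: round(hill_estimator(x, k), 2) for k in (20, 50, 100, 300)})
```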

7.
Long-transported air pollution in Europe is monitored by a combination of a highly complex mathematical model and a limited number of measurement stations. The model predicts deposition on a 150 km × 150 km square grid covering the whole of the continent. These predictions can be regarded as spatial averages, with some spatially correlated model error. The measurement stations give a limited number of point estimates, regarded as error free. We combine these two sources of data by assuming that both are observations of an underlying true process. This true deposition is made up of a smooth deterministic trend, due to gradual changes in emissions over space and time, and two stochastic components. One is non-stationary and correlated over long distances; the other describes variation within a grid square. Our approach is through hierarchical modelling, with predictions and measurements being independent conditioned on the underlying non-stationary true deposition. We assume Gaussian processes and calculate maximum likelihood estimates through numerical optimization. We find that the variation within a grid square is by far the largest component of the variation in the true deposition. We assume that the mathematical model produces estimates of the mean over an area that is approximately equal to a grid square, and we find that it has an error that is similar to the long-range stochastic component of the true deposition, in addition to a large bias.
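A heavily simplified version of this kind of variance-components estimation (a constant trend, one long-range exponential covariance component plus a within-grid nugget, maximum likelihood via numerical optimization) might look like the sketch below; the site locations, covariance form, and parameter values are assumptions, not the authors' model.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist

# Simulate one noisy observation per site: true deposition = constant trend +
# long-range correlated Gaussian field + within-grid (nugget) variation.
rng = np.random.default_rng(5)
sites = rng.uniform(0, 10, size=(60, 2))
dist = cdist(sites, sites)
sigma_long, range_, sigma_within = 1.0, 3.0, 1.5
cov_true = sigma_long**2 * np.exp(-dist / range_) + sigma_within**2 * np.eye(60)
y = 5.0 + np.linalg.cholesky(cov_true) @ rng.standard_normal(60)

def neg_loglik(log_params):
    # Gaussian log-likelihood with an exponential long-range component and a
    # within-grid nugget; parameters are optimized on the log scale.
    s_long, rho, s_within = np.exp(log_params)
    cov = s_long**2 * np.exp(-dist / rho) + s_within**2 * np.eye(60)
    resid = y - y.mean()                          # crude trend: constant mean
    _, logdet = np.linalg.slogdet(cov)
    return 0.5 * (logdet + resid @ np.linalg.solve(cov, resid))

fit = minimize(neg_loglik, x0=np.log([1.0, 1.0, 1.0]), method="Nelder-Mead")
print("ML estimates (long-range sd, range, within-grid sd):",
      np.round(np.exp(fit.x), 2))
```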

8.
An iteratively reweighted approach for robust clustering is presented in this work. The method is initialized with a very robust clustering partition based on a high trimming level. The initial partition is then refined to reduce the number of wrongly discarded observations and substantially increase efficiency. Simulation studies and real data examples indicate that the final clustering solution has good properties in terms of both robustness and efficiency and naturally adapts to the true underlying contamination level.
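A rough stand-in for such an iteratively reweighted scheme (not the authors' algorithm) can be sketched with k-means: start from a high trimming level and gradually re-include the observations closest to the current centres. The trimming schedule and simulated contamination below are arbitrary.

```python
import numpy as np
from sklearn.cluster import KMeans

def reweighted_robust_kmeans(X, n_clusters, init_trim=0.5, final_trim=0.1, steps=5):
    """Sketch of an iteratively refined trimmed clustering: fit on the currently
    kept points, trim the farthest observations, and progressively lower the
    trimming level (a simplified stand-in for the paper's method)."""
    n = X.shape[0]
    keep = np.ones(n, dtype=bool)
    centres = None
    for trim in np.linspace(init_trim, final_trim, steps):
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X[keep])
        centres = km.cluster_centers_
        # Distance of every observation to its nearest current centre.
        d = np.min(np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2), axis=1)
        cutoff = np.quantile(d, 1.0 - trim)       # discard the `trim` fraction
        keep = d <= cutoff
    return centres, keep                           # final centres and kept points

# Two well-separated clusters plus roughly 10% gross contamination.
rng = np.random.default_rng(6)
X = np.vstack([rng.normal(0, 1, (90, 2)), rng.normal(6, 1, (90, 2)),
               rng.uniform(-20, 20, (20, 2))])
centres, inliers = reweighted_robust_kmeans(X, n_clusters=2)
print(np.round(centres, 2), round(inliers.mean(), 2))
```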

9.
In this article, a semiparametric time-varying nonlinear vector autoregressive (NVAR) model is proposed to model nonlinear vector time series data. We consider a combination of parametric and nonparametric estimation approaches to estimate the NVAR function for both independent and dependent errors. We use the multivariate Taylor series expansion of the link function up to the second order, which provides a parametric representation of the nonlinear vector regression function. After the unknown parameters are estimated by the maximum likelihood estimation procedure, the obtained NVAR function is adjusted by a nonparametric diagonal matrix, where the proposed adjustment matrix is estimated by a nonparametric kernel estimator. The asymptotic consistency properties of the proposed estimators are established. Simulation studies are conducted to evaluate the performance of the proposed semiparametric method. A real data example on short-run interest rates and long-run interest rates of United States Treasury securities is analyzed to demonstrate the application of the proposed approach. The Canadian Journal of Statistics 47: 668–687; 2019 © 2019 Statistical Society of Canada

10.
The author introduces robust techniques for estimation, inference and variable selection in the analysis of longitudinal data. She first addresses the problem of the robust estimation of the regression and nuisance parameters, for which she derives the asymptotic distribution. She uses weighted estimating equations to build robust quasi-likelihood functions. These functions are then used to construct a class of test statistics for variable selection. She derives the limiting distribution of these tests and shows their robustness properties in terms of stability of the asymptotic level and power under contamination. An application to a real data set allows her to illustrate the benefits of a robust analysis.

11.
A Bayesian nonparametric model for Taguchi's on-line quality monitoring procedure for attributes is introduced. The proposed model extends the original single-shift setting to the more realistic situation of gradual quality deterioration and allows the incorporation of an expert's opinion on the production process. Based on the number of inspections carried out until a defective item is found, the Bayesian operation is performed for the distribution function that represents the increasing sequence of defective fractions during a cycle, with a mixture of Dirichlet processes as the prior distribution. Bayes estimates for relevant quantities are also obtained.

12.
Taguchi's robust design technique, also known as parameter design, focuses on making product and process designs insensitive (i.e., robust) to hard-to-control variations. In some applications, however, his approach of modeling expected loss and the resulting "product array" experimental format leads to unnecessarily expensive and less informative experiments. The response model approach to robust design proposed by Welch, Ku, Yang, and Sacks (1990), Box and Jones (1990), Lucas (1989), and Shoemaker, Tsui and Wu (1991) offers more flexibility and economy in experiment planning and more informative modeling. This paper develops a formal basis for the graphical data-analytic approach presented in Shoemaker et al. In particular, we decompose overall response variation into components representing the variability contributed by each noise factor, and show when this decomposition allows us to use individual control-by-noise interaction plots to minimize response variation. We then generalize the control-by-noise interaction plots to extend their usefulness, and develop a formal analysis strategy using these plots to minimize response variation.
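The decomposition can be illustrated numerically with a small response-model fit (a hedged sketch with one control factor, two hypothetical noise factors, and simulated data): after estimating the control-by-noise interaction coefficients, the variance transmitted by each noise factor at a given control setting is the squared noise-factor slope times the noise variance.

```python
import numpy as np

# Simulated response-model experiment: one control factor x, two noise factors
# z1, z2 (illustrative names), with control-by-noise interactions.
rng = np.random.default_rng(7)
n = 200
x = rng.choice([-1.0, 0.0, 1.0], size=n)
z1, z2 = rng.standard_normal(n), rng.standard_normal(n)
y = 5 + 1.2*x + 0.8*z1 - 0.5*z2 + 1.0*x*z1 + 0.1*x*z2 + 0.2*rng.standard_normal(n)

# Fit the response model y ~ x + z1 + z2 + x:z1 + x:z2 by least squares.
D = np.column_stack([np.ones(n), x, z1, z2, x*z1, x*z2])
b = np.linalg.lstsq(D, y, rcond=None)[0]
_, bx, d1, d2, g1, g2 = b

# Decompose the transmitted response variance by noise factor as a function of
# the control setting: Var_j(x) = (delta_j + gamma_j * x)^2 * Var(z_j), here
# with Var(z_j) = 1.
for x0 in (-1.0, 0.0, 1.0):
    v1 = (d1 + g1 * x0) ** 2          # contribution of z1
    v2 = (d2 + g2 * x0) ** 2          # contribution of z2
    print(f"x = {x0:+.0f}: var from z1 = {v1:.2f}, from z2 = {v2:.2f}, "
          f"total = {v1 + v2:.2f}")
```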

13.
14.
In this paper, we consider the problem of model-robust design for simultaneous parameter estimation among a class of polynomial regression models with degree up to k. A generalized D-optimality criterion, the Ψα-optimality criterion, first introduced by Läuter (1974), is considered for this problem. By applying the theory of canonical moments and the maximin principle, we derive a model-robust optimal design in the sense of having the highest minimum Ψα-efficiency. Numerical comparison indicates that the proposed design has remarkable performance for parameter estimation in all of the considered rival models.
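The maximin flavour of this problem can be illustrated with the simpler D-criterion (not the Ψα criterion or the canonical-moment machinery of the paper): for a few hypothetical candidate designs, compute the information matrix under each rival polynomial degree and compare the worst-case relative efficiencies.

```python
import numpy as np

def d_value(points, weights, degree):
    """D-criterion value |M(xi)|^(1/p) for a weighted design xi and a polynomial
    model of the given degree on [-1, 1]."""
    F = np.vander(points, N=degree + 1, increasing=True)   # 1, x, ..., x^degree
    M = F.T @ (weights[:, None] * F)                       # information matrix
    return np.linalg.det(M) ** (1.0 / (degree + 1))

# Candidate designs (support points with equal weights); illustrative choices,
# not the Psi_alpha-optimal designs of the paper.
designs = {
    "5 equally spaced": np.linspace(-1, 1, 5),
    "7 equally spaced": np.linspace(-1, 1, 7),
    "endpoints + centre": np.array([-1.0, 0.0, 1.0]),
}
degrees = range(1, 5)                                       # rival models up to k = 4

# Maximin comparison: efficiency of each design for each degree relative to the
# best candidate for that degree, then the worst case over degrees.
table = {name: [d_value(pts, np.full(pts.size, 1 / pts.size), d) for d in degrees]
         for name, pts in designs.items()}
best = {d: max(vals[i] for vals in table.values()) for i, d in enumerate(degrees)}
for name, vals in table.items():
    effs = [v / best[d] for v, d in zip(vals, degrees)]
    print(f"{name:>20}: min relative efficiency over degrees = {min(effs):.3f}")
```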

15.
In this article, we propose a robust statistical approach to select an appropriate error distribution in a classical multiplicative heteroscedastic model. In a first step, unlike the traditional approach, we do not use any GARCH-type estimation of the conditional variance. Instead, we propose to use a recently developed nonparametric procedure [D. Mercurio and V. Spokoiny, Statistical inference for time-inhomogeneous volatility models, Ann. Stat. 32 (2004), pp. 577–602]: the local adaptive volatility estimation. The motivation for using this method is to avoid a possible model misspecification for the conditional variance. In a second step, we suggest a set of estimation and model selection procedures (Berk–Jones tests, kernel density-based selection, censored likelihood score, and coverage probability) based on the so-obtained residuals. These methods make it possible to assess the global fit of a set of distributions as well as to focus on their behaviour in the tails, giving us the capacity to map the strengths and weaknesses of the candidate distributions. A bootstrap procedure is provided to compute the rejection regions in this semiparametric context. Finally, we illustrate our methodology through a small simulation study and an application to three time series of daily returns (UBS stock returns, BOVESPA returns and EUR/USD exchange rates).
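Once standardized residuals are in hand (however the volatility was estimated), candidate error distributions can be compared by a full-sample log-score and by a censored likelihood score that focuses on the tail, as in the hedged sketch below; the simulated residuals, the candidate set, and the tail threshold are assumptions, and the Berk–Jones test and the bootstrap rejection regions are not reproduced.

```python
import numpy as np
from scipy import stats

# Suppose `resid` are standardized residuals obtained after dividing returns by
# some volatility estimate (here just simulated heavy-tailed data; the local
# adaptive volatility step is not reproduced).
rng = np.random.default_rng(8)
resid = rng.standard_t(df=5, size=2000)
resid = (resid - resid.mean()) / resid.std()

candidates = {
    "normal":    stats.norm(loc=0, scale=1),
    "Student-t": stats.t(df=5, loc=0, scale=np.sqrt(3 / 5)),  # unit-variance t(5)
}

threshold = np.quantile(resid, 0.10)         # focus on the left (loss) tail
for name, dist in candidates.items():
    full_score = dist.logpdf(resid).mean()   # overall fit: average log-score
    in_tail = resid < threshold
    # Censored likelihood score: exact log-density inside the tail region,
    # only the log-probability of the complement elsewhere.
    cens = np.where(in_tail, dist.logpdf(resid), np.log(1 - dist.cdf(threshold)))
    print(f"{name:>10}: full log-score = {full_score:.4f}, "
          f"censored tail score = {cens.mean():.4f}")
```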

16.
In dental implant research studies, events such as implant complications, including pain or infection, may be observed recurrently before failure events, i.e. the death of implants. It is natural to assume that recurrent events and failure events are correlated to each other, since they happen on the same implant (subject) and complication times have strong effects on the implant survival time. On the other hand, each patient may have more than one implant. Therefore these recurrent events or failure events are clustered, since implant complication times or failure times within the same patient (cluster) are likely to be correlated. The overall implant survival times and recurrent complication times are both of interest to us. In this paper, a joint modelling approach is proposed for modelling complication events and dental implant survival times simultaneously. The proposed method uses a frailty process to model the correlation within clusters and the correlation within subjects. We use Bayesian methods to obtain estimates of the parameters. Performance of the joint models is shown via simulation studies and data analysis.

17.
Compared with most of the existing phase I designs, the recently proposed calibration-free odds (CFO) design has been demonstrated to be robust, model-free, and easy to use in practice. However, the original CFO design cannot handle late-onset toxicities, which have been commonly encountered in phase I oncology dose-finding trials with targeted agents or immunotherapies. To account for late-onset outcomes, we extend the CFO design to its time-to-event (TITE) version, which inherits the calibration-free and model-free properties. One salient feature of CFO-type designs is to adopt game theory by having three doses compete at a time, including the current dose and the two neighboring doses, while interval-based designs only use the data at the current dose and are thus less efficient. We conduct comprehensive numerical studies for the TITE-CFO design under both fixed and randomly generated scenarios. TITE-CFO shows robust and efficient performance compared with interval-based and model-based counterparts. In conclusion, the TITE-CFO design provides a robust, efficient, and easy-to-use alternative for phase I trials when the toxicity outcome is late-onset.

18.
Directional testing of vector parameters, based on higher order approximations of likelihood theory, can ensure extremely accurate inference, even in high-dimensional settings where standard first order likelihood results can perform poorly. Here we explore examples of directional inference where the calculations can be simplified, and prove that in several classical situations, the directional test reproduces exact results based on F-tests. These findings give a new interpretation of some classical results and support the use of directional testing in general models, where exact solutions are typically not available. The Canadian Journal of Statistics 47: 619–627; 2019 © 2019 Statistical Society of Canada
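The exact benchmark referred to here is the classical F-test for a linear hypothesis in a Gaussian linear model; the sketch below (with simulated data and a hypothetical two-coefficient restriction) computes that exact test, which is what the directional test is proven to reproduce in such situations. The directional test itself is not implemented here.

```python
import numpy as np
from scipy import stats

# Exact F-test for H0: beta_2 = beta_3 = 0 in a Gaussian linear model.
rng = np.random.default_rng(9)
n = 40
X = np.column_stack([np.ones(n), rng.standard_normal((n, 3))])
beta_true = np.array([1.0, 0.5, 0.0, 0.0])
y = X @ beta_true + rng.standard_normal(n)

def rss(Xmat):
    # Residual sum of squares of the least-squares fit of y on Xmat.
    b = np.linalg.lstsq(Xmat, y, rcond=None)[0]
    r = y - Xmat @ b
    return r @ r

rss_full = rss(X)            # full model: intercept + 3 covariates
rss_null = rss(X[:, :2])     # null model: the last two coefficients set to zero
q, p = 2, X.shape[1]         # q restrictions, p parameters in the full model

F = ((rss_null - rss_full) / q) / (rss_full / (n - p))
p_value = stats.f.sf(F, q, n - p)
print(f"F = {F:.3f}, p-value = {p_value:.4f}")
```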

19.
Statistical Methods & Applications - A major challenge when trying to detect fraud is that the fraudulent activities form a minority class which makes up a very small proportion of the data set....

20.
Joint models for longitudinal and time-to-event data have been applied in many different fields of statistics and clinical studies. However, the main difficulty these models face is computational. The requirement for numerical integration becomes severe when the dimension of the random effects increases. In this paper, a modified two-stage approach is proposed to estimate the parameters in joint models. In particular, in the first stage, linear mixed-effects models and best linear unbiased predictors are applied to estimate parameters in the longitudinal submodel. In the second stage, an approximation of the fully joint log-likelihood is proposed using the estimated values of these parameters from the longitudinal submodel. Survival parameters are estimated by maximizing the approximation of the fully joint log-likelihood. Simulation studies show that the approach performs well, especially when the dimension of random effects increases. Finally, we implement this approach on AIDS data.
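A minimal two-stage sketch in this spirit (not the authors' exact approximation) could fit a random-intercept linear mixed model in the first stage, extract the BLUPs, and plug them into a simple exponential survival log-likelihood maximized numerically in the second stage; the simulated data, the exponential hazard, and the variable names are all assumptions.

```python
import numpy as np
import statsmodels.api as sm
from scipy.optimize import minimize

# Simulated longitudinal + survival data (hypothetical structure): each subject
# has repeated biomarker measurements with a random intercept b_i, and the
# hazard depends on b_i through an exponential survival model.
rng = np.random.default_rng(10)
n_sub, n_obs = 100, 5
b = rng.normal(0, 1, n_sub)                         # true random intercepts
subj = np.repeat(np.arange(n_sub), n_obs)
t_obs = np.tile(np.arange(n_obs, dtype=float), n_sub)
y_long = 2.0 + 0.5 * t_obs + b[subj] + 0.5 * rng.standard_normal(subj.size)
haz = 0.05 * np.exp(0.8 * b)
T = rng.exponential(1.0 / haz)
C = rng.exponential(20.0, n_sub)
time, event = np.minimum(T, C), (T <= C).astype(float)

# Stage 1: random-intercept linear mixed model; the BLUPs of b_i are the
# fitted random effects.
X = sm.add_constant(t_obs)
lmm = sm.MixedLM(y_long, X, groups=subj).fit()
blup = np.array([re.iloc[0] for re in lmm.random_effects.values()])

# Stage 2: plug the BLUPs into an exponential survival log-likelihood and
# maximize it (a simple stand-in for the approximated joint log-likelihood).
def neg_loglik(par):
    log_lam0, alpha = par
    lam = np.exp(log_lam0 + alpha * blup)
    return -np.sum(event * np.log(lam) - lam * time)

fit = minimize(neg_loglik, x0=np.array([-3.0, 0.0]), method="Nelder-Mead")
print("estimated association parameter alpha:", round(fit.x[1], 2))
```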
