Similar Documents (20 results)
1.
Optimal Predictive Tests (cited 1 time: 1 self-citation, 0 by others)

2.
We propose tests for parameter constancy in the time series direction in panel data models. We construct a locally best invariant test based on Tanaka [Time series analysis: nonstationary and noninvertible distribution theory. New York: Wiley; 1996] and an asymptotically point optimal test based on Elliott and Müller [Efficient tests for general persistent time variation in regression coefficients. Rev Econ Stud. 2006;73:907–940]. We derive the limiting distributions of the test statistics as T→∞ while N is fixed, and calculate the critical values by applying numerical integration and response surface regression. Simulation results show that the proposed tests perform well if we apply them appropriately.

3.
In this paper we propose a series of goodness-of-fit tests for the family of skew-normal models when all parameters are unknown. As the null distributions of the considered test statistics depend only on the asymmetry parameter, we use a default, proper prior on the skewness parameter, leading to the prior predictive p-value advocated by G. Box. The proposed goodness-of-fit tests depend only on the sample size and exhibit full agreement between nominal and actual size. They also have good power against local alternative models that likewise account for asymmetry in the data.

4.
Estimation of the parameters of an exponential distribution based on record data has been treated by Samaniego and Whitaker [On estimating population characteristics from record-breaking observations, I. Parametric results, Naval Res. Logist. Q. 33 (1986), pp. 531–543] and Doostparast [A note on estimation based on record data, Metrika 69 (2009), pp. 69–80]. Recently, Doostparast and Balakrishnan [Optimal record-based statistical procedures for the two-parameter exponential distribution, J. Statist. Comput. Simul. 81(12) (2011), pp. 2003–2019] obtained optimal confidence intervals as well as uniformly most powerful tests for one- and two-sided hypotheses concerning location and scale parameters based on record data from a two-parameter exponential model. In this paper, we derive optimal statistical procedures, including point and interval estimation as well as most powerful tests, based on record data from a two-parameter Pareto model. For illustrative purposes, a data set on the annual wages of a sample of production-line workers in a large industrial firm is analysed using the proposed procedures.

5.
Abstract

This article argues that researchers do not need to completely abandon the p-value, the best-known significance index, but should instead stop using significance levels that do not depend on sample sizes. A testing procedure is developed using a mixture of frequentist and Bayesian tools, with a significance level that is a function of sample size, obtained from a generalized form of the Neyman–Pearson Lemma that minimizes a linear combination of α, the probability of rejecting a true null hypothesis, and β, the probability of failing to reject a false null, instead of fixing α and minimizing β. The resulting hypothesis tests do not violate the Likelihood Principle and do not require any constraints on the dimensionalities of the sample space and parameter space. The procedure includes an ordering of the entire sample space and uses predictive probability (density) functions, allowing for testing of both simple and compound hypotheses. Accessible examples are presented to highlight specific characteristics of the new tests.
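The core idea of minimizing a linear combination of α and β, rather than fixing α, can be illustrated with a toy example. This is only a sketch of the generalized Neyman–Pearson principle for two simple hypotheses on a binomial count (the article's procedure is more general, using predictive densities and compound hypotheses): rejecting exactly where b·f1(k) > a·f0(k) minimizes a·α + b·β, and the implied significance level then shrinks automatically as the sample size grows.

```python
from math import comb

def binom_pmf(k, n, p):
    """Binomial(n, p) probability mass function."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def min_linear_combination_test(n, p0, p1, a=1.0, b=1.0):
    """Test H0: p = p0 vs H1: p = p1 from a Binomial(n, p) count.
    Rejecting exactly where b*f1(k) > a*f0(k) minimizes a*alpha + b*beta
    (generalized Neyman-Pearson); alpha is then determined, not fixed."""
    rejection_region = [k for k in range(n + 1)
                        if b * binom_pmf(k, n, p1) > a * binom_pmf(k, n, p0)]
    alpha = sum(binom_pmf(k, n, p0) for k in rejection_region)
    beta = 1.0 - sum(binom_pmf(k, n, p1) for k in rejection_region)
    return rejection_region, alpha, beta

# The implied significance level decreases with n instead of
# staying fixed at a conventional value such as 0.05.
_, alpha_small, _ = min_linear_combination_test(10, 0.5, 0.7)
_, alpha_large, _ = min_linear_combination_test(100, 0.5, 0.7)
```

With equal weights a = b = 1, the implied level is about 0.17 at n = 10 but drops below 0.05 at n = 100, which is precisely the sample-size-dependent behaviour the article advocates.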

6.
Abstract

Measuring the accuracy of diagnostic tests is crucial in many application areas, including medicine, machine learning and credit scoring. The receiver operating characteristic (ROC) curve and surface are useful tools to assess the ability of diagnostic tests to discriminate between ordered classes or groups. To define these diagnostic tests, the optimal thresholds that maximize the accuracy of the tests must be selected. One procedure commonly used to find the optimal thresholds is maximizing what is known as Youden’s index. This article presents nonparametric predictive inference (NPI) for selecting the optimal thresholds of a diagnostic test. NPI is a frequentist statistical method that is explicitly aimed at using few modeling assumptions, enabled through the use of lower and upper probabilities to quantify uncertainty. Based on multiple future observations, the NPI approach is presented for selecting the optimal thresholds for two-group and three-group scenarios. In addition, a pairwise approach is also presented for the three-group scenario. The article ends with an example illustrating the proposed methods and a simulation study of the predictive performance of the proposed methods alongside some classical methods such as the Youden index. The NPI-based methods show some interesting results that overcome some of the issues concerning the predictive performance of Youden’s index.
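For reference, the classical Youden-index procedure that the NPI methods are compared against can be sketched in a few lines. The two score distributions below are hypothetical, chosen only to make the optimum easy to reason about:

```python
import random

def youden_optimal_threshold(scores_neg, scores_pos):
    """Pick the cut-off maximizing Youden's index J = sensitivity + specificity - 1,
    classifying a subject as positive when its score exceeds the threshold."""
    best_thr, best_j = None, -1.0
    for thr in sorted(set(scores_neg) | set(scores_pos)):
        sens = sum(s > thr for s in scores_pos) / len(scores_pos)
        spec = sum(s <= thr for s in scores_neg) / len(scores_neg)
        j = sens + spec - 1.0
        if j > best_j:
            best_thr, best_j = thr, j
    return best_thr, best_j

# Hypothetical diagnostic scores: non-diseased ~ N(0, 1), diseased ~ N(2, 1);
# the theoretical optimum is a threshold near 1, with J about 0.68.
rng = random.Random(0)
neg = [rng.gauss(0, 1) for _ in range(500)]
pos = [rng.gauss(2, 1) for _ in range(500)]
thr, j = youden_optimal_threshold(neg, pos)
```

The NPI approach differs by targeting the classification of multiple future observations with lower and upper probabilities, rather than maximizing this in-sample index.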

7.
Abstract

In this paper, we consider the preliminary test approach to the estimation of the regression parameter in a multiple regression model under multicollinearity. The preliminary test almost unbiased two-parameter estimators based on the Wald, likelihood ratio, and Lagrangian multiplier tests are given for the case where the regression parameter is suspected to be restricted to a subspace and the regression errors follow a multivariate Student’s t distribution. The bias and quadratic risk of the proposed estimators are derived and compared. Furthermore, a Monte Carlo simulation is provided to illustrate some of the theoretical results.

8.
Three test statistics for a change-point in a linear model, variants of those considered by Andrews and Ploberger [Optimal tests when a nuisance parameter is present only under the alternative. Econometrica. 1994;62:1383–1414], are studied: the sup-likelihood ratio (LR) statistic, a weighted average of the exponential of LR-statistics, and a weighted average of LR-statistics. Critical values for the statistics with time-trend regressors, obtained via simulation, are found to vary considerably depending on conditions on the error terms. The performance of the bootstrap in approximating p-values of the distributions is assessed in a simulation study. A sample approximation to asymptotic analytical expressions extending those of Kim and Siegmund [The likelihood ratio test for a change-point in simple linear regression. Biometrika. 1989;76:409–423] for the sup-LR test is also assessed. The approximations and bootstrap are applied to the Quandt data [The estimation of a parameter of a linear regression system obeying two separate regimes. J Amer Statist Assoc. 1958;53:873–880] and to real data concerning a change-point in oxygen uptake during incremental exercise testing; the bootstrap gives reasonable results.
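The sup-LR-plus-bootstrap combination can be illustrated in the simplest special case: a single shift in the mean of an otherwise constant model. This sketch is not the paper's exact statistics or regressors, only the general recipe (sup an LR/F-type statistic over trimmed candidate break points, then bootstrap the demeaned data to get a p-value):

```python
import random
import statistics

def sup_lr_mean_shift(y, trim=0.15):
    """Sup over candidate break points of an LR/F-type statistic for a single
    mean shift -- a toy special case of a change-point in a linear model."""
    n = len(y)
    grand_mean = statistics.fmean(y)
    rss0 = sum((v - grand_mean) ** 2 for v in y)  # residual SS under no break
    best = 0.0
    for k in range(int(n * trim), int(n * (1 - trim))):
        m1, m2 = statistics.fmean(y[:k]), statistics.fmean(y[k:])
        rss1 = (sum((v - m1) ** 2 for v in y[:k])
                + sum((v - m2) ** 2 for v in y[k:]))
        best = max(best, n * (rss0 - rss1) / rss1)
    return best

def bootstrap_pvalue(y, n_boot=100, seed=1):
    """Approximate the null distribution by resampling the demeaned series."""
    rng = random.Random(seed)
    centred = [v - statistics.fmean(y) for v in y]
    observed = sup_lr_mean_shift(y)
    exceed = sum(
        sup_lr_mean_shift([rng.choice(centred) for _ in y]) >= observed
        for _ in range(n_boot))
    return observed, (exceed + 1) / (n_boot + 1)

rng = random.Random(2)
y = [rng.gauss(0, 1) for _ in range(50)] + [rng.gauss(1.5, 1) for _ in range(50)]
stat, p = bootstrap_pvalue(y)
```

A 1.5-standard-deviation shift at mid-sample is detected easily: the observed sup statistic is far in the tail of the bootstrap null distribution.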

9.
We obtain adjustments to the profile likelihood function in Weibull regression models with and without censoring. Specifically, we consider two different modified profile likelihoods: (i) the one proposed by Cox and Reid [Cox, D.R. and Reid, N., 1987, Parameter orthogonality and approximate conditional inference. Journal of the Royal Statistical Society B, 49, 1–39.], and (ii) an approximation to the one proposed by Barndorff–Nielsen [Barndorff–Nielsen, O.E., 1983, On a formula for the distribution of the maximum likelihood estimator. Biometrika, 70, 343–365.], the approximation having been obtained using the results by Fraser and Reid [Fraser, D.A.S. and Reid, N., 1995, Ancillaries and third-order significance. Utilitas Mathematica, 47, 33–53.] and by Fraser et al. [Fraser, D.A.S., Reid, N. and Wu, J., 1999, A simple formula for tail probabilities for frequentist and Bayesian inference. Biometrika, 86, 655–661.]. We focus on point estimation and likelihood ratio tests on the shape parameter in the class of Weibull regression models. We derive some distributional properties of the different maximum likelihood estimators and likelihood ratio tests. The numerical evidence presented in the paper favors the approximation to Barndorff–Nielsen's adjustment.

10.
In this article, we introduce two goodness-of-fit tests for testing normality through the concept of the posterior predictive p-value. The discrepancy variables selected are the Kolmogorov-Smirnov (KS) and Berk-Jones (BJ) statistics, and the prior chosen is Jeffreys’ prior. The constructed posterior predictive p-values are shown to be distributed independently of the unknown parameters under the null hypothesis, so they can be used as test statistics. Simulations show that the new tests are more powerful than the corresponding classical tests against most of the alternatives considered.
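The key fact the article exploits, that the null distribution of the discrepancy does not depend on the unknown mean and variance, can be demonstrated with a Monte Carlo sketch. Note this uses the classical plug-in KS statistic with a simulated null (essentially the Lilliefors construction), not the authors' posterior predictive scheme; it only illustrates why parameter-free null distributions make such statistics directly usable:

```python
import math
import random
import statistics

def ks_normal(x):
    """KS distance between the empirical CDF of x and a normal distribution
    with the sample mean and standard deviation plugged in."""
    n = len(x)
    m, s = statistics.fmean(x), statistics.stdev(x)
    d = 0.0
    for i, v in enumerate(sorted(x)):
        f = 0.5 * (1.0 + math.erf((v - m) / (s * math.sqrt(2.0))))
        d = max(d, abs(f - (i + 1) / n), abs(f - i / n))
    return d

def mc_pvalue(x, reps=400, seed=3):
    """Monte Carlo p-value: the statistic is location-scale invariant, so its
    null distribution can be simulated from standard normal samples alone."""
    rng = random.Random(seed)
    obs = ks_normal(x)
    exceed = sum(ks_normal([rng.gauss(0.0, 1.0) for _ in x]) >= obs
                 for _ in range(reps))
    return (exceed + 1) / (reps + 1)

rng = random.Random(4)
p_normal = mc_pvalue([rng.gauss(5.0, 2.0) for _ in range(80)])    # null is true
p_skewed = mc_pvalue([rng.expovariate(1.0) for _ in range(80)])   # skewed data
```

The normal sample yields an unremarkable p-value, while the skewed exponential sample is rejected decisively.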

11.
In partly linear models, the dependence of the response y on (x^T, t) is modeled through the relationship y = x^T β + g(t) + ε, where ε is independent of (x^T, t). We are interested in developing an estimation procedure that combines the flexibility of partly linear models, studied by several authors, while allowing some variables to belong to a non-Euclidean space. The motivating application of this paper deals with the explanation of atmospheric SO2 pollution incidents using these models when some of the predictive variables lie on a cylinder. In this paper, estimators of β and g are constructed when the explanatory variables t take values on a Riemannian manifold, and the asymptotic properties of the proposed estimators are obtained under suitable conditions. We illustrate the use of this estimation approach with an environmental data set and explore the performance of the estimators through a simulation study.

12.
A class of “optimal” U-statistic-type nonparametric test statistics is proposed for the one-sample location problem by considering a kernel depending on a constant a and all possible (distinct) subsamples of size two from a sample of n independent and identically distributed observations. The “optimal” choice of a is determined by the underlying distribution. The proposed class includes the sign and the modified Wilcoxon signed-rank statistics as special cases. It is shown that any “optimal” member of the class performs better, in terms of Pitman efficiency, than the sign and Wilcoxon signed-rank statistics. The effect on Pitman efficiency of deviating from the “optimal” a is also examined. A Hodges-Lehmann-type point estimator of the location parameter corresponding to the proposed “optimal” test statistic is also defined and studied in this paper.
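The abstract does not specify the kernel, so the sketch below uses a hypothetical symmetrized form, sign(x_i + a·x_j), chosen because it reproduces the two stated special cases: a = 0 gives a sign-test-type statistic and a = 1 a modified Wilcoxon signed-rank-type statistic (the kernel over Walsh-average pairs). It is an illustration of the U-statistic construction, not the paper's definition:

```python
import random
from itertools import combinations

def sign(v):
    return (v > 0) - (v < 0)

def pairwise_location_stat(x, a):
    """U-statistic over all distinct pairs with the hypothetical symmetrized
    kernel [sign(x_i + a*x_j) + sign(x_j + a*x_i)] / 2. With a = 0 this is a
    sign-test-type statistic; with a = 1, a modified Wilcoxon signed-rank type."""
    pairs = list(combinations(x, 2))
    total = sum(sign(xi + a * xj) + sign(xj + a * xi) for xi, xj in pairs)
    return total / (2 * len(pairs))

rng = random.Random(7)
centred = [rng.gauss(0, 1) for _ in range(60)]   # H0: location 0
shifted = [v + 1.0 for v in centred]             # location shifted to 1
t_null = pairwise_location_stat(centred, 1.0)
t_shift = pairwise_location_stat(shifted, 1.0)
```

Under the null the statistic hovers near zero; under a one-standard-deviation shift it moves close to its upper bound, which is what drives the test's power.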

13.
Optimal accelerated degradation test (ADT) plans are developed assuming that the constant-stress loading method is employed and the degradation characteristic follows a Wiener process. Unlike previous works on planning ADTs based on stochastic process models, this article determines the test stress levels and the proportion of test units allocated to each stress level such that the asymptotic variance of the maximum-likelihood estimator of the qth quantile of the lifetime distribution at the use condition is minimized. In addition, compromise plans are also developed for checking the validity of the relationship between the model parameters and the stress variable. Finally, using an example, sensitivity analysis procedures are presented for evaluating the robustness of optimal and compromise plans against uncertainty in the pre-estimated parameter values, and the importance of optimally determining test stress levels and the proportion of units allocated to each stress level is illustrated.

14.
A combination of a smooth test statistic and (an approximate) Schwarz's selection rule was proposed by Inglot, T., Kallenberg, W. C. M. and Ledwina, T. [(1997). Data-driven smooth tests for composite hypotheses. Ann. Statist., 25, 1222–1250] as a solution of a standard goodness-of-fit problem when nuisance parameters are present. In the present paper we modify that solution by proposing another analogue of Schwarz's rule and rederiving its properties and those of the resulting test statistic. To avoid technicalities, we restrict our attention to the location-scale family and method-of-moments estimators of its parameters. In a parallel paper [Janic-Wróblewska, A. (2004). Data-driven smooth tests for the extreme value distribution. Statistics, in press] we illustrate an application of our solution and the advantages of the modification when testing fit to the extreme value distribution.

15.
In this paper we present data-driven smooth tests for the extreme value distribution. These tests are based on a general idea of construction of data-driven smooth tests for composite hypotheses introduced by Inglot, T., Kallenberg, W. C. M. and Ledwina, T. [(1997). Data-driven smooth tests for composite hypotheses. Ann. Statist., 25, 1222–1250] and its modification for location-scale family proposed in Janic-Wróblewska, A. [(2004). Data-driven smooth test for a location-scale family. Statistics, in press]. Results of power simulations show that the newly introduced test performs very well for a wide range of alternatives and is competitive with other commonly used tests for the extreme value distribution.

16.
17.
Nonparametric regression models are often used to check or suggest a parametric model. Several methods have been proposed to test the hypothesis of a parametric regression function against an alternative smoothing spline model. Some tests, such as the locally most powerful (LMP) test by Cox et al. (Cox, D., Koh, E., Wahba, G. and Yandell, B. (1988). Testing the (parametric) null model hypothesis in (semiparametric) partial and generalized spline models. Ann. Stat., 16, 113–119.) and the generalized maximum likelihood (GML) ratio test and the generalized cross validation (GCV) test by Wahba (Wahba, G. (1990). Spline models for observational data. CBMS-NSF Regional Conference Series in Applied Mathematics, SIAM.), were developed from the corresponding Bayesian models, and their frequentist properties have not been studied. We conduct simulations to evaluate and compare their finite-sample performances. Simulation results show that the performances of these tests depend on the shape of the true function: the LMP and GML tests are more powerful for low-frequency functions, while the GCV test is more powerful for high-frequency functions. For all test statistics, the null distributions are complicated, and computationally intensive Monte Carlo methods can be used to calculate them. We also propose approximations to these null distributions and evaluate their performances by simulations.

18.
A minimum-cost CUSUM test for an event-rate increase when inter-event times are exponentially distributed is presented. Optimal values of the test decision parameters, h and k, are developed from a renewal reward model of the event cycle by combining a non-linear optimization technique with an exact method for determining exponential average run lengths. Robustness of the test to event-cycle parameter estimates and to departures from the assumption of exponentially distributed inter-event times is discussed in the context of an injury-monitoring scenario. Robustness to positively serially correlated observations arising from EAR(1) and EMA(1) processes is also examined.
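The mechanics of such a CUSUM are simple to sketch: an event-rate increase makes the inter-event times shorter, so a one-sided CUSUM accumulates (k − t) and signals when the sum first exceeds h. The values of k and h below are illustrative placeholders, not the paper's cost-optimal values:

```python
import random

def cusum_rate_increase(inter_event_times, k, h):
    """One-sided CUSUM for an event-rate increase on exponential inter-event
    times: accumulate (k - t), reset at zero, signal when the sum exceeds h."""
    s = 0.0
    for i, t in enumerate(inter_event_times, start=1):
        s = max(0.0, s + (k - t))
        if s > h:
            return i          # index of the observation at which we signal
    return None               # no signal

rng = random.Random(5)
times = ([rng.expovariate(1.0) for _ in range(50)]      # in control, rate 1
         + [rng.expovariate(4.0) for _ in range(50)])   # rate jumps to 4
signal_at = cusum_rate_increase(times, k=0.6, h=4.0)    # illustrative k, h
```

While the process is in control, the drift of the increments (k − t) is negative and the statistic stays near zero; after the rate increase the drift turns positive and the statistic climbs to the decision limit.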

19.
Abstract

Robust parameter design (RPD) is an effective tool that combines experimental design and strategic modeling to determine the optimal operating conditions of a system. The usual assumptions of RPD are normally distributed experimental data and no contamination by outliers, and parameter uncertainties in response models are generally neglected. However, applying normal-theory modeling methods to skewed data and ignoring parameter uncertainties can degrade the optimization and production phases, producing a misleading fit, poorly estimated optimal operating conditions, and poor-quality products. This article presents a new approach based on confidence-interval (CI) response modeling for the process mean. The proposed interval robust design makes the system median unbiased for the mean and uses the midpoint of the interval as a measure of the location performance response. As an alternative robust estimator for process-variance response modeling, the biweight midvariance is proposed, which is both resistant and efficiency-robust when normality is not met. The results further show that the proposed interval robust design gives a robust solution for skewed and contaminated data. The procedure and its advantages are illustrated using two experimental design studies.

20.
The classical problem of testing treatment versus control is revisited by considering a class of test statistics based on a kernel that depends on a constant ‘a’. The proposed class includes the celebrated Wilcoxon-Mann-Whitney statistic as a special case when ‘a’ = 1. It is shown that, with the optimal choice of ‘a’ for the underlying distribution, the optimal member performs better (in terms of Pitman efficiency) than the Wilcoxon-Mann-Whitney and median tests for a wide range of underlying distributions. An extended Hodges-Lehmann-type point estimator of the shift parameter corresponding to the proposed ‘optimal’ test statistic is also derived.
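The abstract leaves the a-dependent kernel unspecified, so this sketch implements only the classical Wilcoxon-Mann-Whitney special case (the a = 1 member of the class), the benchmark against which the “optimal” members are compared:

```python
import random

def wmw_u(control, treatment):
    """Wilcoxon-Mann-Whitney U: count of (control, treatment) pairs with
    x < y, ties counted as 1/2 -- the a = 1 member of the class above."""
    return sum(1.0 if x < y else (0.5 if x == y else 0.0)
               for x in control for y in treatment)

rng = random.Random(6)
control = [rng.gauss(0, 1) for _ in range(40)]
treatment = [rng.gauss(1, 1) for _ in range(40)]   # shifted by one sd
u_hat = wmw_u(control, treatment) / (len(control) * len(treatment))
# u_hat estimates P(X < Y); for a one-sd normal shift the true value
# is Phi(1/sqrt(2)), roughly 0.76
```

Values of u_hat well above 1/2 indicate a treatment shift; Pitman efficiency comparisons in the paper measure how quickly such statistics detect shrinking shifts relative to the optimal member of the class.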
