Similar documents (20 results)
1.
The problem of testing whether there is a change in location in a sequence of random variables taken over time has been discussed in several papers. In this paper we develop a conservative nonparametric distribution-free confidence bound for the amount of shift and give some Monte Carlo results to show how conservative the bound is.

2.
In this article, we propose a biweight approach to a real-life location problem, namely, the estimation of a realistic exchange rate for the Nigerian currency, naira (for easy reference, we denote the exchange rate parameter by θ).

Our proposal is essentially a critique of the methods used by the Central Bank of Nigeria (CBN) to derive its estimate θ_CBN of θ. The CBN generates the necessary data by periodically organizing a foreign exchange market (FEM) where it sells a certain amount of US dollars to authorized foreign exchange dealers. (The amount of dollars available for sale is usually inadequate to meet aggregate demand, so there is literally a 'scramble' among dealers for a 'slice of the cake'.) During each session of the FEM, each dealer quotes: (a) how much naira (variable Y) it will pay for US$1, and (b) the amount of US dollars (variable X) it wants to buy. The CBN estimates, based on observations of Y, have been found to be unstable, and part of the problem seems to lie in the fact that a few atypical or outlier values are generated at FEM sessions and the CBN estimation methods are not resistant to these extreme values.

This article presents a robust/resistant model which is designed to tackle the problem of outliers head on: we exploit the resistance property of the biweight to help reduce the influence of any outlier on the final biweight estimate θ_bw. Furthermore, we use the biweight weight, in conjunction with X, as an instrument to check against the generation of outliers at the FEM.
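As a hedged illustration of the resistance property exploited here (a generic Tukey biweight location estimate, not the authors' θ_bw procedure; the quote values below are invented):

```python
import numpy as np

def biweight_location(x, c=6.0, tol=1e-6, max_iter=100):
    """Tukey biweight (bisquare) location M-estimate: observations far
    from the current centre get weight (1 - u**2)**2 for |u| < 1 and
    weight 0 otherwise, so outliers barely influence the final value."""
    x = np.asarray(x, dtype=float)
    t = np.median(x)                      # resistant starting value
    for _ in range(max_iter):
        mad = np.median(np.abs(x - t))    # resistant scale estimate
        if mad == 0:
            return t
        u = (x - t) / (c * mad)
        w = np.where(np.abs(u) < 1, (1 - u**2) ** 2, 0.0)
        t_new = np.sum(w * x) / np.sum(w)
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t

# Hypothetical FEM quotes (naira per US$1) with two atypical bids.
quotes = [21.9, 22.0, 22.1, 22.0, 21.8, 22.2, 35.0, 9.5]
print(np.mean(quotes))            # pulled towards the outliers
print(biweight_location(quotes))  # stays close to the bulk of the quotes
```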

3.
As a robust method against model deviation we consider a pre-test estimation function. To optimize a continuous design for this problem we give an asymptotic risk matrix for the quadratic loss. The risk is then given by an isotonic criterion function of the asymptotic risk matrix. As an optimization criterion we look for a design that minimizes the maximal risk in the deviation model under the restriction that the risk in the original model does not exceed a given bound. This optimization problem is solved for polynomial regression, with the deviation consisting of one additional regression function and the criterion function being the determinant.

4.
Madan L. Puri & I. Vincze, Statistics, 2013, 47(4): 405-506

In this paper we investigate the problem of deriving the C-F-R (Cramér-Fréchet-Rao) bound for the variance of an unbiased estimator of the translation parameter for a class of distributions having as support an interval of fixed length. Starting with the general form of the C-F-R inequality studied earlier by Vincze (1979) for mixed densities, we prove some inequalities related to the information quantity occurring in the C-F-R bound. The case when the variance of the unbiased estimator does not depend upon the translation parameter is investigated, and the case when the variance depends upon the translation parameter is also briefly discussed. Finally, some remarks are given concerning the attainability of the variance bounds given in this paper.
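For context, the classical regular form of the bound that the C-F-R inequality generalizes (the mixed-density version of Vincze (1979) used in the paper is not reproduced here): for an unbiased estimator $T$ of $\theta$,

$$\operatorname{Var}_\theta(T)\;\ge\;\frac{1}{I(\theta)},\qquad I(\theta)=\mathbb{E}_\theta\!\left[\left(\frac{\partial}{\partial\theta}\log f(X;\theta)\right)^{2}\right].$$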

5.
The problem of characterizing a distribution by its moments dates to work by Chebyshev in the mid-nineteenth century. There are clear (and close) connections with characteristic functions, moment spaces, quadrature, and other very classical mathematical pursuits. Lindsay and Basak posed the specific question of how far from normality a distribution could be if it matches k normal moments. They provided a bound on the maximal difference in cdfs, and implied that these bounds were attained. It will be shown here that in fact the bound is not attained if the number of even moments matched is odd. An explicit solution is developed as a symmetric distribution with a finite number of mass points when the number of even moments matched is even, and this bound for the even case is shown to hold as an explicit limit for the subsequent odd case. As Lindsay noted, the discrepancies can be sizable even for a moderate number of matched moments. Some comments on implications are proffered.

6.
A multiple decision approach to the problem of selecting the population with the largest mean was formulated by Bechhofer (1954), where a single-sample solution was presented for the case of normal populations with known variances. In this paper the problem of selecting the normal population with the largest mean is considered when the population variances are unequal and unknown but are constrained only to be less than a specified upper bound. It is demonstrated that a slight modification of Bechhofer's procedure will suffice to ensure the probability requirements under this simple constraint for cases of practical interest.
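A minimal Monte Carlo sketch (not the paper's analysis) of how the probability of correctly selecting the best population behaves for the "largest sample mean" rule when every standard deviation is set to a common upper bound; k, delta and sigma_bound are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def prob_correct_selection(n, k=4, delta=0.5, sigma_bound=1.0, reps=20000):
    """Estimate P(correct selection) when one mean leads by delta and
    all standard deviations equal the upper bound sigma_bound."""
    means = np.zeros(k)
    means[-1] = delta                      # the best population
    x = rng.normal(means, sigma_bound, size=(reps, n, k))
    xbar = x.mean(axis=1)                  # sample means, shape (reps, k)
    return np.mean(xbar.argmax(axis=1) == k - 1)

for n in (5, 10, 20, 40):
    print(n, prob_correct_selection(n))
```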

7.
In the analysis of clustered and/or longitudinal data, it is usually desirable to ignore covariate information for other cluster members as well as future covariate information when predicting the outcome for a given subject at a given time. This can be accomplished through conditional mean models which merely condition on the considered subject's covariate history at each time. Pepe & Anderson (Commun. Stat. Simul. Comput. 23, 1994, 939) have shown that ordinary generalized estimating equations may yield biased estimates for the parameters in such models, but that valid inferences can be guaranteed by using a diagonal working covariance matrix in the equations. In this paper, we provide insight into the nature of this problem by uncovering substantive data-generating mechanisms under which such biases will result. We then propose a class of asymptotically unbiased estimators for the parameters indexing the suggested conditional mean models. In addition, we provide a representation for the efficient estimator in our class, which attains the semi-parametric efficiency bound under the model, along with an efficient algorithm for calculating it. This algorithm is easy to apply and may realize major efficiency improvements, as demonstrated through simulation studies. The results suggest ways to improve the efficiency of inverse-probability-of-treatment estimators which adjust for time-varying confounding, and are used to estimate the effect of discontinuing highly active anti-retroviral therapy (HAART) on viral load in HIV-infected patients.
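The practical recommendation quoted from Pepe & Anderson, namely to use a diagonal (independence) working covariance when the mean model conditions only on the subject's own covariate history, can be illustrated with statsmodels; the data below are simulated, and this is not the authors' proposed efficient estimator:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.genmod.cov_struct import Independence
from statsmodels.genmod.families import Gaussian

rng = np.random.default_rng(1)

# Simulated longitudinal data: 200 subjects, 4 visits each.
n_sub, n_vis = 200, 4
df = pd.DataFrame({
    "id": np.repeat(np.arange(n_sub), n_vis),
    "x": rng.normal(size=n_sub * n_vis),
})
subj_effect = np.repeat(rng.normal(scale=0.5, size=n_sub), n_vis)
df["y"] = 1.0 + 2.0 * df["x"] + subj_effect + rng.normal(size=len(df))

# GEE with a diagonal (independence) working covariance matrix,
# as recommended for conditional mean models of this kind.
model = sm.GEE.from_formula("y ~ x", groups="id", data=df,
                            cov_struct=Independence(), family=Gaussian())
print(model.fit().summary())
```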

8.
In this article we consider the sample size determination problem in the context of robust Bayesian parameter estimation of the Bernoulli model. Following a robust approach, we consider classes of conjugate Beta prior distributions for the unknown parameter. We assume that inference is robust if posterior quantities of interest (such as point estimates and limits of credible intervals) do not change too much as the prior varies in the selected classes of priors. For the sample size problem, we consider criteria based on predictive distributions of the lower bound, upper bound and range of the posterior quantity of interest. The sample size is selected so that, before observing the data, one is confident of observing a small value for the posterior range and, depending on design goals, a large (small) value of the lower (upper) bound of the quantity of interest. We also discuss relationships with, and comparisons to, non-robust and non-informative Bayesian methods.
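A minimal sketch of the "posterior range over a class of priors" idea for the Beta-Bernoulli model; the prior class, the fixed prior sample size, and the observed proportions are all illustrative, and the paper's criteria evaluate such ranges through the predictive distribution before the data are observed:

```python
import numpy as np

# Illustrative class of conjugate Beta(a, b) priors: prior mean in
# [0.4, 0.6], prior sample size a + b fixed at 10.
prior_means = np.linspace(0.4, 0.6, 21)
n0 = 10.0
priors = [(m * n0, (1 - m) * n0) for m in prior_means]

def posterior_mean_range(n, s):
    """Range of posterior means of theta over the prior class, given s
    successes in n Bernoulli trials (the posterior is Beta(a+s, b+n-s))."""
    post_means = [(a + s) / (a + b + n) for a, b in priors]
    return max(post_means) - min(post_means)

# The range shrinks as n grows; robust sample-size rules pick n so that
# this kind of range is small with high predictive probability.
for n in (10, 50, 200, 1000):
    print(n, round(posterior_mean_range(n, s=int(0.55 * n)), 4))
```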

9.
In this paper, we give a lower bound for the number of treatments required for a plan to be a main effect plus one plan for 2^m (m = 6) factorial experiments. The lower bound problem is important in the event of generating new designs with similar properties or when one wants to study the criteria of optimality for such designs.

10.
We consider the problem of variable selection and estimation in the linear regression model in situations where the number of parameters diverges with the sample size. We propose the adaptive Generalized Ridge-Lasso (AdaGril), which is an extension of the adaptive Elastic Net. AdaGril incorporates information redundancy among correlated variables for model selection and estimation. It combines the strengths of quadratic regularization and adaptively weighted Lasso shrinkage. In this article, we highlight the grouped selection property of the AdaCnet method (one type of AdaGril) in the equal correlation case. Under weak conditions, we establish the oracle property of AdaGril, which ensures optimal large-sample performance when the dimension is high. Consequently, it achieves both goals of handling the problem of collinearity in high dimensions and enjoying the oracle property. Moreover, we show that the AdaGril estimator achieves a Sparsity Inequality, i.e., a bound in terms of the number of non-zero components of the “true” regression coefficient. This bound is obtained under a weak Restricted Eigenvalue (RE) condition similar to that used for the Lasso. Simulation studies show that some particular cases of AdaGril outperform its competitors.
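AdaGril itself is not available in standard libraries; the sketch below only conveys the general "adaptively weighted penalty" flavour via a two-stage scikit-learn fit (the column rescaling reweights both penalty terms, so this is a simplification, not the authors' estimator; all tuning values are arbitrary):

```python
import numpy as np
from sklearn.linear_model import Ridge, ElasticNet

rng = np.random.default_rng(2)

# Simulated sparse regression with correlated predictors (shared factor).
n, p = 100, 40
X = rng.normal(size=(n, p)) + 0.8 * rng.normal(size=(n, 1))
beta = np.zeros(p)
beta[:5] = 2.0
y = X @ beta + rng.normal(size=n)

# Stage 1: ridge fit gives initial coefficients for the adaptive weights.
init = Ridge(alpha=1.0).fit(X, y).coef_
w = 1.0 / (np.abs(init) + 1e-6)          # large weight -> stronger shrinkage

# Stage 2: elastic net on a rescaled design; dividing column j by w_j
# penalises coefficient j proportionally to w_j.
Xw = X / w
enet = ElasticNet(alpha=0.1, l1_ratio=0.7).fit(Xw, y)
beta_hat = enet.coef_ / w                # back on the original scale

print(np.nonzero(np.abs(beta_hat) > 1e-8)[0])   # selected variables
```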

11.
The problem of estimating the total weight of objects using a spring balance weighing design has been dealt with in this paper. Based on a theorem by Dey and Gupta (1977) giving a lower bound for the variance of the estimated total weight, a necessary and sufficient condition for this lower bound to be attained is obtained. A few special cases where the lower bound is attained are enumerated.
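For context, a small numerical sketch of the quantity being bounded: with the spring balance model y = Xw + e and least squares, the total weight is estimated by 1'ŵ with variance σ² 1'(X'X)⁻¹1. The design matrix below is illustrative, and the Dey-Gupta bound itself is not reproduced:

```python
import numpy as np

def total_weight_variance(X, sigma2=1.0):
    """Variance of the least-squares estimate of the total weight 1'w in
    the spring balance model y = X w + e with Var(e) = sigma2 * I:
    Var(1' w_hat) = sigma2 * 1' (X'X)^{-1} 1."""
    p = X.shape[1]
    ones = np.ones(p)
    xtx_inv = np.linalg.inv(X.T @ X)
    return sigma2 * ones @ xtx_inv @ ones

# Illustrative 0/1 spring balance weighing design: 4 weighings, 3 objects.
X = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
print(total_weight_variance(X))
```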

12.
Analyzing repeated difference tests aims both at significance testing for differences and at estimating the mean discrimination ability of the consumers. In addition to the average success probability, the proportion of consumers that may detect the difference between two products and therefore account for any increase of this probability is of interest. While some authors address the first two goals, for the latter only an estimator directly linked to the average probability seems to be used. However, this may lead to unreasonable results. Therefore we propose a new approach based on multiple test theory. We define a suitable set of hypotheses that is closed under intersection. From this, we derive a series of hypotheses that may be sequentially tested while the overall significance level will not be violated. By means of this procedure we may determine a minimal number of assessors that must have perceived the difference between the products at least once in a while. From this, we can find a conservative lower bound for the proportion of perceivers within the consumers. In several examples, we give some insight into the properties of this new method and show that knowledge of this lower bound might indeed be valuable for the investigator. Finally, an adaptation of this approach to similarity tests is proposed.

13.
We consider Gaussian mixtures and particularly the problem of testing homogeneity, that is, testing no mixture against a mixture with two components. Seven distinct cases are addressed, corresponding to the possible restrictions on the parameters. For each case, we give a statistic that we claim to be the likelihood ratio test statistic. The proof is given in a simple case. With the help of a bound for the maximum of a Gaussian process we calculate the percentile points. The results are illustrated by simulation.
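A minimal sketch of the likelihood ratio statistic in question, using generic EM fits from scikit-learn for the unconstrained case (not the authors' case-by-case statistics); the data are simulated under homogeneity:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)

# Data generated under homogeneity (a single Gaussian component).
x = rng.normal(size=(500, 1))

def loglik(k, x):
    gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(x)
    return gm.score(x) * len(x)          # score() returns the mean log-likelihood

lrt = 2 * (loglik(2, x) - loglik(1, x))  # likelihood ratio statistic
print(lrt)
# Under homogeneity this statistic does NOT follow a chi-square law; its
# null distribution involves the supremum of a Gaussian process, which is
# why the paper uses a bound on that supremum to obtain percentile points.
```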

14.
In practice, different practitioners will use different Phase I samples to estimate the process parameters, which will lead to different Phase II control chart performance. Researchers refer to this variability as the between-practitioner variability of control charts. Since between-practitioner variability is important in the design of the CUSUM median chart with estimated process parameters, the standard deviation of the average run length (SDARL) is used to study its properties. It is shown that the CUSUM median chart requires a large number of Phase I samples to sufficiently reduce the variation in its in-control ARL. Given the limited number of Phase I samples available in practice, a bootstrap approach is also used here to adjust the control limits of the CUSUM median chart. Comparisons are made for the CUSUM and Shewhart median charts with estimated parameters when using the adjusted and unadjusted control limits, and some conclusions are drawn.

15.
When testing a hypothesis with a nuisance parameter present only under the alternative, the maximum of a test statistic over the nuisance parameter space has been proposed. Different upper bounds for the one-sided tail probabilities of the maximum tests have been provided. Davies (1977, Biometrika 64, 247–254) studied the problem when the parameter space is an interval, while Efron (1997, Biometrika 84, 143–157) considered the problem with a finite number of points in the parameter space and obtained a W-formula. We study the limiting bound of Efron's W-formula when the number of points in the parameter space goes to infinity. The conditions under which the limiting bound of the W-formula is identical to that of Davies are given. The results are also extended to two-sided tests. Examples, including case-control genetic association studies, are used to illustrate the conditions. Efficient calculations of upper bounds for the tail probability with a finite number of points in the parameter space are described.

16.
Statistical decision theory can sometimes be used to find, via a least favourable prior distribution, a statistical procedure that attains the minimax risk. This theory also provides, using an 'unfavourable prior distribution', a very useful lower bound on the minimax risk. In the late 1980s, Kempthorne showed how, using a least favourable prior distribution, a specified integrated risk can sometimes be minimised, subject to an inequality constraint on a different risk. Specifically, he was concerned with the solution of a minimax-Bayes compromise problem ('compromise decision theory'). Using an unfavourable prior distribution, Kabaila & Tuck (2008) provided a very useful lower bound on an integrated risk, subject to an inequality constraint on a different risk. We extend this result to the case of multiple inequality constraints on specified risk functions and integrated risks. We also describe a new and very effective method for the computation of an unfavourable prior distribution that leads to a very useful lower bound. This method is simply to maximize the lower bound directly with respect to the unfavourable prior distribution. Not only does this method result in a relatively tight lower bound, it is also fast because it avoids the repeated computation of the global maximum of a function with multiple local maxima. The advantages of this computational method are illustrated using the problems of bounding the performance of a point estimator of (i) the multivariate normal mean and (ii) the univariate normal mean.
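For context, the elementary inequality behind using any prior $\pi$ to lower-bound the minimax risk (the paper's contribution concerns constrained versions of this bound and the computation of a good unfavourable $\pi$, not this basic fact):

$$\inf_{\delta}\,\sup_{\theta} R(\theta,\delta)\;\ge\;\inf_{\delta}\int R(\theta,\delta)\,\pi(d\theta)\;=\;r(\pi),$$

so maximizing the Bayes risk $r(\pi)$ over candidate priors tightens the lower bound.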

17.
Limit expressions (as the dimension p → ∞) are derived for the relative risk of the James-Stein estimator and its positive-part version. The limit is simple to evaluate, and gives the amount of improvement in risk that is possible. The technique used is to bound the risk, both above and below, with bounds that converge to the same limit. For the James-Stein estimator these bounds are simple to calculate, and are quite accurate even for moderate dimensions.
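The relative risk being studied is easy to approximate by simulation; the sketch below is a generic Monte Carlo check (not the paper's analytic limit expressions), and the configuration ||θ||² = p is just an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(4)

def relative_risk_js(p, theta_norm2, reps=20000, positive_part=True):
    """Monte Carlo relative risk (vs the MLE X) of the James-Stein
    estimator for X ~ N_p(theta, I), with ||theta||^2 = theta_norm2."""
    theta = np.zeros(p)
    theta[0] = np.sqrt(theta_norm2)
    x = rng.normal(theta, 1.0, size=(reps, p))
    shrink = 1 - (p - 2) / np.sum(x**2, axis=1, keepdims=True)
    if positive_part:
        shrink = np.maximum(shrink, 0.0)
    js = shrink * x
    risk_js = np.mean(np.sum((js - theta) ** 2, axis=1))
    return risk_js / p                    # the risk of the MLE is p

for p in (5, 20, 100):
    print(p, round(relative_risk_js(p, theta_norm2=p), 3))
```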

18.
The purpose of this paper is to highlight some classic issues in the measurement of change and to show how contemporary solutions can be used to deal with some of these issues. Five classic issues are raised here: (1) separating individual changes from group differences; (2) options for incomplete longitudinal data over time; (3) options for nonlinear changes over time; (4) measurement invariance in studies of changes over time; and (5) new opportunities for modeling dynamic changes. For each issue we describe the problem and then review some contemporary solutions based on Structural Equation Models (SEM). We fit these SEMs using existing panel data on cognitive variables from the Health & Retirement Study (HRS). This is not intended as an overly technical treatment, so only a few basic equations are presented, examples are displayed graphically, and more complete references to the contemporary solutions are given throughout.

19.
In many clinical trials, biological, pharmacological, or clinical information is used to define candidate subgroups of patients that might have a differential treatment effect. Once the trial results are available, interest will focus on subgroups with an increased treatment effect. Estimating a treatment effect for these groups, together with an adequate uncertainty statement, is challenging, owing to the resulting "random high" / selection bias. In this paper, we investigate Bayesian model averaging to address this problem. The general motivation for the use of model averaging is the realization that subgroup selection can be viewed as model selection, so that methods to deal with model selection uncertainty, such as model averaging, can also be used in this setting. Simulations are used to evaluate the performance of the proposed approach. We illustrate it on an example early-phase clinical trial.
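The general flavour of averaging over subgroup models can be sketched with BIC-based weights as a crude stand-in for marginal likelihoods; this is not the paper's specific prior or model set, and the data are simulated:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)

# Simulated trial: treatment indicator and one candidate subgroup marker.
n = 400
treat = rng.integers(0, 2, n)
marker = rng.integers(0, 2, n)                   # candidate subgroup
y = 0.3 * treat + rng.normal(size=n)             # no true subgroup effect

def fit(columns):
    X = sm.add_constant(np.column_stack(columns))
    res = sm.OLS(y, X).fit()
    return res, res.bic

# Model 1: overall effect only; Model 2: adds a treatment-by-marker interaction.
m1, bic1 = fit([treat])
m2, bic2 = fit([treat, marker, treat * marker])

# Approximate posterior model probabilities from BIC (equal prior odds).
b = np.array([bic1, bic2])
w = np.exp(-0.5 * (b - b.min()))
w /= w.sum()

# Model-averaged treatment effect in the marker-positive subgroup.
effect = w[0] * m1.params[1] + w[1] * (m2.params[1] + m2.params[3])
print(w, effect)
```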

20.
In this paper, we propose a lower-bound-based smoothed quasi-Newton algorithm for computing the solution paths of the group bridge estimator in linear regression models. Our method is based on a quasi-Newton algorithm with a smoothed group bridge penalty, in combination with a novel data-driven thresholding rule for the regression coefficients. This rule is derived from a necessary KKT condition of the group bridge optimization problem. It is easy to implement and can be used to eliminate groups with zero coefficients, thus reducing the dimension of the optimization problem. The proposed algorithm removes the groupwise orthogonality condition required by coordinate descent and LARS algorithms for group variable selection. Numerical results show that the proposed algorithm outperforms the coordinate descent based algorithms in both efficiency and accuracy.
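Not the authors' algorithm (which adds a lower-bound-based thresholding rule and a tailored quasi-Newton scheme), but a minimal sketch of the underlying idea: smooth the non-differentiable group bridge penalty so that a generic quasi-Newton routine can be applied. All tuning values are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)

# Simulated data with two groups of three predictors each.
n = 100
groups = [np.arange(0, 3), np.arange(3, 6)]
X = rng.normal(size=(n, 6))
beta_true = np.array([1.5, -1.0, 0.5, 0.0, 0.0, 0.0])   # second group inactive
y = X @ beta_true + rng.normal(size=n)

lam, gamma, eps = 2.0, 0.5, 1e-6   # penalty weight, bridge exponent, smoothing

def objective(beta):
    # Smoothed group bridge penalty: (sum of sqrt(beta_j^2 + eps) within
    # each group)^gamma; eps makes the penalty differentiable at zero.
    rss = 0.5 * np.sum((y - X @ beta) ** 2)
    pen = sum(np.sum(np.sqrt(beta[g] ** 2 + eps)) ** gamma for g in groups)
    return rss + lam * pen

res = minimize(objective, np.zeros(6), method="BFGS")
# Coefficients in the inactive group end up near, but not exactly, zero;
# this is what the paper's data-driven thresholding rule addresses.
print(np.round(res.x, 3))
```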
