Similar Articles
20 similar articles found.
1.
Staudte, R.G., Zhang, J. Lifetime Data Analysis, 1997, 3(4): 383-398
The p-value evidence for an alternative to a null hypothesis regarding the mean lifetime can be unreliable if based on asymptotic approximations when there is only a small sample of right-censored exponential data. However, a guarded weight of evidence for the alternative can always be obtained without approximation, no matter how small the sample, and has some other advantages over p-values. Weights of evidence are defined as estimators of 0 when the null hypothesis is true and 1 when the alternative is true, and they are judged on the basis of the ensuing risks, where risk is mean squared error of estimation. The evidence is guarded in that a preassigned bound is placed on the risk under the hypothesis. Practical suggestions are given for choosing the bound and for interpreting the magnitude of the weight of evidence. Acceptability profiles are obtained by inversion of a family of guarded weights of evidence for two-sided alternatives to point hypotheses, just as confidence intervals are obtained from tests; these profiles are arguably more informative than confidence intervals, and are easily determined for any level and any sample size, however small. They can help understand the effects of different amounts of censoring. They are found for several small data sets, including a sample of size 12 for post-operative cancer patients. Both singly Type I and Type II censored examples are included. An examination of the risk functions of these guarded weights of evidence suggests that if the censoring time is of the same magnitude as the mean lifetime, or larger, then the risks in using a guarded weight of evidence based on a likelihood ratio are not much larger than they would be if the parameter were known.
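
As a rough illustration of the guarded-weight-of-evidence idea described above (not the authors' construction), the sketch below simulates the null and alternative risks of one candidate weight of evidence for the mean of Type I right-censored exponential data: the likelihood ratio is mapped into [0,1] through a tuning constant k, which can then be enlarged until the null risk meets a preassigned bound. The mapping, the constant k and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def censored_exp_sample(theta, c, n):
    """Type I right-censored exponential sample: times and event indicators."""
    t = rng.exponential(theta, n)
    return np.minimum(t, c), (t <= c).astype(float)

def log_lik(theta, x, d):
    """Exponential log-likelihood (mean theta) with right censoring."""
    return -d.sum() * np.log(theta) - x.sum() / theta

def weight_of_evidence(x, d, theta0, theta1, k):
    """Map the likelihood ratio of theta1 vs theta0 into [0, 1]; illustrative form."""
    lr = np.exp(log_lik(theta1, x, d) - log_lik(theta0, x, d))
    return lr / (k + lr)

def risk(theta_true, theta0, theta1, c, n, k, target, reps=5000):
    """Mean squared error of the weight of evidence when theta_true is the truth."""
    w = np.array([weight_of_evidence(*censored_exp_sample(theta_true, c, n),
                                     theta0, theta1, k) for _ in range(reps)])
    return np.mean((w - target) ** 2)

theta0, theta1, c, n = 1.0, 2.0, 1.5, 12   # censoring time comparable to the mean
for k in (1.0, 3.0, 10.0):                 # larger k shrinks the null risk
    r0 = risk(theta0, theta0, theta1, c, n, k, target=0.0)
    r1 = risk(theta1, theta0, theta1, c, n, k, target=1.0)
    print(f"k={k:5.1f}  null risk={r0:.3f}  alternative risk={r1:.3f}")
```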

2.
A test for a hypothesized parameter is generalized by replacing the indicator function of the test critical region with a function ('weight of evidence for the alternative') having values in [0,1] and estimating the value 1 when the alternative is true and 0 otherwise. It is a 'guarded' weight of evidence if a bound is placed on the Type I risk. The focus of this paper is on a guarded weight of evidence which is a function of the likelihood ratio of the sign statistic for a two-sided alternative to a point hypothesis regarding the centre of a symmetric distribution. Inversion of a family of such guarded weights of evidence yields an 'acceptability profile' for the median which is more informative than the traditional confidence interval for the median. The main results, with the exception of the comparison of the Type II risks with an envelope risk, are based entirely on permutation arguments.

3.
Estimation of regression parameters in linear survival models is considered in the clustered data setting. One-step updates from an initial consistent estimator are proposed. The updates are based on scores that are functions of ranks of the residuals, and that incorporate weight matrices to improve efficiency. Optimal weights are approximated as the solution to a quadratic programming problem, and asymptotic relative efficiencies to various other weights are computed. Except under strong dependence, simpler methods are found to be nearly as efficient as the optimal weights. The performance of several practical estimators based on exchangeable and independence working models is explored in simulations.

4.
The asymptotic structure of a vector of weighted sums of signs of residuals, in the general linear model, is studied. The vector can be used as a basis for outlier-detection tests, or alternatively, setting the vector to zero and solving for the parameter yields a class of robust estimators which are analogues of the sample median. Asymptotic results for both estimates and tests are obtained. The question of optimal weights is investigated, and the optimal estimators in the case of simple linear regression are found to coincide with estimators introduced by Adichie.

5.
This paper considers problems of interval estimation and hypothesis testing for the generalized Lorenz curve under the Pareto distribution. Our approach is based on the concepts of generalized test variables and generalized pivotal quantities. The merits of the proposed procedures are evaluated numerically and compared with asymptotic and bootstrap methods. Empirical evidence shows that the coverage accuracy of the proposed confidence intervals and the type I error control of the proposed exact tests are satisfactory. For illustration purposes, a real data set on median income of the 20 occupations in the United States Census of Population is analysed.

6.
All existing location-scale rank tests use equal weights for the components. We advocate the use of weighted combinations of statistics. This approach can partly be substantiated by the theory of locally most powerful tests. We specifically investigate a Wilcoxon-Mood combination. We give exact critical values for a range of weights. The asymptotic normality of the test statistic is proved under a general hypothesis and Chernoff-Savage conditions. The asymptotic relative efficiency of this test with respect to unweighted combinations shows that a careful choice of weights results in a gain in efficiency.
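
A minimal sketch of a weighted Wilcoxon-Mood combination is given below, assuming both component statistics are used in standardized form and the reference distribution is obtained by permutation rather than from the exact critical values tabulated in the paper; the weights and function names are illustrative.

```python
import numpy as np
from scipy.stats import ranksums, mood

def weighted_location_scale_stat(x, y, w_loc=0.6, w_scale=0.4):
    """Weighted combination of the standardized Wilcoxon (location) and
    Mood (scale) statistics; equal weights give a Lepage-type statistic."""
    z_loc = ranksums(x, y)[0]     # approximately N(0, 1) under H0
    z_scale = mood(x, y)[0]       # approximately N(0, 1) under H0
    return w_loc * z_loc**2 + w_scale * z_scale**2

def permutation_pvalue(x, y, reps=2000, seed=0, **kw):
    """Permutation reference distribution for the weighted statistic."""
    rng = np.random.default_rng(seed)
    obs = weighted_location_scale_stat(x, y, **kw)
    pooled, n = np.concatenate([x, y]), len(x)
    count = 0
    for _ in range(reps):
        perm = rng.permutation(pooled)
        count += weighted_location_scale_stat(perm[:n], perm[n:], **kw) >= obs
    return (count + 1) / (reps + 1)

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 30)
y = rng.normal(0.5, 1.5, 30)           # shifted location and inflated scale
print("p-value:", permutation_pvalue(x, y))
```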

7.
The quick estimators of location and scale have broad applications and are widely used. For a variety of symmetric populations we obtain the quantiles and the weights for which the asymptotic variances of the quick estimators are minimum. These optimal quick estimators are then used to obtain the asymptotic relative efficiencies of the commonly used estimators such as the trimean, Gastwirth, median, midrange, and interquartile range with respect to the optimal quick estimators, in order to determine a choice among them and to check whether they are unacceptably poor. In the process it is seen that the interquartile range is the optimal quick estimator of scale for Cauchy populations, but the interdecile range is in general preferable. Also, the optimal estimator of location for the logistic distribution puts weights 0.3 on each of the two quartiles and 0.4 on the median. It is shown that for the symmetric distributions, such as the beta and Tukey-lambda with [d] > 0, which have finite support and short tails, i.e. the tail exponents (Parzen, 1979) satisfy [d] < 1, the midrange and the range are the optimal quick estimators of location and scale respectively if [d] < 1/2. The class of such distributions includes the distributions with high discontinuous tails, e.g. Tukey-lambda with [d] > 1, as well as some distributions with p.d.f.'s going to zero at the ends of the finite support, such as Tukey-lambda with 1/2 < [d] < 1. As a byproduct an interesting tail correspondence between beta and Tukey-lambda distributions is seen.
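
The logistic-optimal weights quoted above (0.3 on each quartile and 0.4 on the median) translate directly into a quick location estimator. The sketch below uses plain sample quantiles and hypothetical names; the paper's exact order-statistic conventions may differ.

```python
import numpy as np

def quick_location(x, weights=(0.3, 0.4, 0.3), probs=(0.25, 0.5, 0.75)):
    """Quick estimator of location: a weighted combination of sample quantiles.
    Defaults are the weights quoted above as optimal for the logistic case."""
    return float(np.dot(weights, np.quantile(x, probs)))

rng = np.random.default_rng(0)
sample = rng.logistic(loc=2.0, scale=1.0, size=500)
print(quick_location(sample))   # should be close to 2.0
print(np.median(sample))        # plain median, for comparison
```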

8.
Statistics, 2012, 46(6): 1396-1436
The paper deals with an asymptotic relative efficiency concept for confidence regions of multidimensional parameters that is based on the expected volumes of the confidence regions. Under standard conditions the asymptotic relative efficiencies of confidence regions are seen to be certain powers of the ratio of the limits of the expected volumes. These limits are explicitly derived for confidence regions associated with certain plug-in estimators, likelihood ratio tests and Wald tests. Under regularity conditions, the asymptotic relative efficiency of each of these procedures with respect to each one of its competitors is equal to 1. The results are applied to multivariate normal distributions and multinomial distributions in a fairly general setting.

9.
In this paper, attention is focused on estimation of the location parameter in the double exponential case using a weighted linear combination of the sample median and pairs of order statistics, with symmetric distance to both sides of the sample median. Minimizing with respect to the weights and distances, we obtain a smaller second-order asymptotic variance. If the number of pairs tends to infinity and the distances to zero, we attain the least asymptotic variance in this class of estimators. The Pitman estimator is also noted. The improved estimators are likewise examined in terms of their probability of concentration to investigate its bound. A numerical comparison of the estimators is presented.
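
A minimal sketch of an estimator of this general form follows, assuming an odd sample size and user-chosen offsets and weights summing to one; it shows only the structure (sample median plus symmetrically placed pairs of order statistics), not the optimal choices derived in the paper.

```python
import numpy as np

def median_plus_pairs(x, offsets=(2, 4), pair_weights=(0.2, 0.1), median_weight=0.7):
    """Location estimate: median_weight times the sample median plus, for each
    offset k, a weight times the average of the order statistics k positions
    below and above the median (odd sample size assumed)."""
    xs = np.sort(np.asarray(x))
    m = len(xs) // 2                              # index of the median for odd n
    est = median_weight * xs[m]
    for k, w in zip(offsets, pair_weights):
        est += w * 0.5 * (xs[m - k] + xs[m + k])
    return est

rng = np.random.default_rng(0)
sample = rng.laplace(loc=1.0, scale=1.0, size=201)   # double exponential data
print(median_plus_pairs(sample), np.median(sample))
```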

10.
In this article, the asymptotic distribution of the circular median is derived for symmetric distributions on the circle. Its asymptotic relative efficiency with respect to the mean direction and to an estimator proposed by Watson (1983) is then examined. Special attention is given to the cases where the underlying distribution is von Mises and contaminated von Mises. It is seen that the circular median can perform more efficiently than both estimators in the presence of outliers.
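
A small sketch of one common definition of the circular median (the observation minimizing the mean arc distance to the sample), compared with the mean direction on von Mises data with a few displaced points; the exact definition used in the article may differ.

```python
import numpy as np

def arc_distance(a, b):
    """Shorter arc length between angles a and b (radians)."""
    d = np.abs(a - b) % (2 * np.pi)
    return np.minimum(d, 2 * np.pi - d)

def circular_median(theta):
    """Observation minimizing the mean arc distance to the sample."""
    costs = [arc_distance(t, theta).mean() for t in theta]
    return theta[int(np.argmin(costs))]

def circular_mean(theta):
    """Mean direction, for comparison."""
    return np.arctan2(np.sin(theta).mean(), np.cos(theta).mean())

rng = np.random.default_rng(0)
theta = rng.vonmises(mu=0.0, kappa=2.0, size=100)
theta[:5] += np.pi                                  # a few outlying directions
print("circular median:", circular_median(theta))
print("mean direction :", circular_mean(theta))
```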

11.
In non-randomized biomedical studies using the proportional hazards model, the data often constitute an unrepresentative sample of the underlying target population, which results in biased regression coefficients. The bias can be avoided by weighting included subjects by the inverse of their respective selection probabilities, as proposed by Horvitz & Thompson (1952) and extended to the proportional hazards setting for use in surveys by Binder (1992) and Lin (2000). In practice, the weights are often estimated and must be treated as such in order for the resulting inference to be accurate. The authors propose a two-stage weighted proportional hazards model in which, at the first stage, weights are estimated through a logistic regression model fitted to a representative sample from the target population. At the second stage, a weighted Cox model is fitted to the biased sample. The authors propose estimators for the regression parameter and cumulative baseline hazard. They derive the asymptotic properties of the parameter estimators, accounting for the difference in the variance introduced by the randomness of the weights. They evaluate the accuracy of the asymptotic approximations in finite samples through simulation. They illustrate their approach in an analysis of renal transplant patients using data obtained from the Scientific Registry of Transplant Recipients.
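
The two-stage idea can be sketched with standard tools, assuming the scikit-learn and lifelines packages, a hypothetical data file and hypothetical column names; note that the robust (sandwich) variance requested here does not account for the estimated weights, which is the refinement the authors supply.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

# Hypothetical representative sample: covariates z1, z2 for everyone, a
# 'selected' indicator, and time/event recorded for the selected (biased) subset.
rep = pd.read_csv("representative_sample.csv")

# Stage 1: model the selection probability from the representative sample.
sel_model = LogisticRegression().fit(rep[["z1", "z2"]], rep["selected"])

# Stage 2: inverse-selection-probability weights on the biased sample,
# then a weighted Cox model (robust=True gives a sandwich variance that
# ignores the randomness of the estimated weights).
biased = rep[rep["selected"] == 1].copy()
biased["ipw"] = 1.0 / sel_model.predict_proba(biased[["z1", "z2"]])[:, 1]

cph = CoxPHFitter()
cph.fit(biased[["time", "event", "z1", "z2", "ipw"]],
        duration_col="time", event_col="event",
        weights_col="ipw", robust=True)
cph.print_summary()
```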

12.
To apply the quasi-likelihood method one needs both the mean and the variance functions to determine its optimal weights. If the variance function is unknown, then the weights should be acquired from the data. One way to do so is by adaptive estimation, which involves non-parametric estimation of the variance function. Adaptation, however, also brings in noise that hampers the improvement for moderate samples. In this paper we introduce an alternative method based not on the estimation of the variance function, but on the penalized minimization of the asymptotic variance of the estimator. By doing so we are able to retain a restricted optimality under the smoothness condition, however strong that condition may be. This is important because for moderate sample sizes we need to impose a strong smoothness constraint to damp the noise, often stronger than would be adequate for the adaptive method. We give a rigorous development of the related asymptotic theory and provide simulation evidence for the advantage of this method.

13.
It is known that for nonparametric regression, local linear composite quantile regression (local linear CQR) is a more competitive technique than classical local linear regression, since it can significantly improve estimation efficiency under a class of non-normal and symmetric error distributions. However, this method only applies to symmetric errors because, without the symmetry condition, the estimation bias is non-negligible and therefore the resulting estimator is inconsistent. In this paper, we propose a weighted local linear CQR method for general error conditions. This method applies to both symmetric and asymmetric random errors. Because of the use of weights, the estimation bias is eliminated asymptotically and the asymptotic normality is established. Furthermore, by minimizing the asymptotic variance, the optimal weights are computed and consequently the optimal estimate (the most efficient estimate) is obtained. By comparing relative efficiency theoretically or numerically, we can ensure that the new estimation outperforms the local linear CQR estimation. Finite-sample behavior, examined in simulation studies, further illustrates the theoretical findings.

14.
The performance of tests in Aalen's linear regression model is studied using asymptotic power calculations and stochastic simulation. Aalen's original least squares test is compared to two modifications: a weighted least squares test with correct weights and a test where the variance is re-estimated under the null hypothesis. The test with re-estimated variance provides the highest power of the tests in the setting of this paper, and the gain is substantial for covariates following a skewed distribution such as the exponential. It is further shown that Aalen's choice of weight function with re-estimated variance is optimal in the one-parameter case against proportional alternatives.

15.
Recent research by Sakata and White (1995) presents the consistency and asymptotic normality of S-estimators in nonlinear regression. It is well known from research in linear regression that it is important to use a consistent high breakdown estimator as an initial estimate when computing an S-estimate. This paper presents the proof of the weak consistency of the least median of squares estimator in a nonlinear regression setting, thus suggesting that it is a reasonable choice for the starting value for computing S-estimates in nonlinear regression.
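
A rough sketch of a least-median-of-squares starting value for a nonlinear fit via an elemental-subset search, assuming a hypothetical exponential mean function and simulated data with gross outliers; it illustrates the use of the LMS criterion as an initial estimate only, not the paper's consistency argument.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

def model(beta, x):
    """Hypothetical nonlinear mean function."""
    return beta[0] * np.exp(beta[1] * x)

x = np.linspace(0.0, 2.0, 60)
y = model([2.0, 1.2], x) + rng.normal(0.0, 0.3, x.size)
y[:5] += 15.0                                  # a few gross outliers

def lms_start(x, y, n_subsets=500, p=2):
    """Fit many elemental subsets and keep the fit whose full-sample
    median squared residual is smallest (an LMS-type starting value)."""
    best, best_crit = None, np.inf
    for _ in range(n_subsets):
        idx = rng.choice(x.size, size=p, replace=False)
        fit = least_squares(lambda b: model(b, x[idx]) - y[idx], x0=[1.0, 1.0])
        crit = np.median((y - model(fit.x, x)) ** 2)
        if crit < best_crit:
            best, best_crit = fit.x, crit
    return best

print("LMS starting value:", lms_start(x, y))   # close to (2.0, 1.2)
```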

16.
This article considers nonparametric estimation of reliable life based on ranked set sampling and its properties. It is proven analytically that the large-sample efficiency of the reliable life estimator under balanced ranked set sampling is higher than that under simple random sampling of the same size, but the relative efficiency diminishes as the reliable life moves away from the median in either direction. To improve the efficiency for the estimation of extreme reliable life, we then propose a reliable life estimator under a modified ranked set sampling protocol, and establish its strong consistency and asymptotic normality. The proposed sampling is shown to be superior to balanced ranked set sampling, and the relative advantage improves as the reliable life moves away from the median. Finally, results of simulation studies for small samples, as well as an application to a real data set, are presented to illustrate some of the theoretical findings.
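
A minimal simulation sketch contrasting a balanced ranked set sample with a simple random sample of the same size for estimating the reliable life (the time at which survival equals R); perfect ranking and a hypothetical Weibull lifetime model are assumed, and the modified protocol proposed in the article is not implemented.

```python
import numpy as np

rng = np.random.default_rng(0)

def balanced_rss(draw, set_size, cycles):
    """Balanced ranked set sample: in each cycle, for every rank r, draw a set
    of 'set_size' units and measure only the r-th smallest (perfect ranking)."""
    out = []
    for _ in range(cycles):
        for r in range(set_size):
            out.append(np.sort(draw(set_size))[r])
    return np.array(out)

def reliable_life(sample, R):
    """Nonparametric reliable life: the (1 - R) sample quantile."""
    return np.quantile(sample, 1.0 - R)

draw = lambda n: 10.0 * rng.weibull(1.5, size=n)   # hypothetical lifetime model
rss = balanced_rss(draw, set_size=4, cycles=25)    # 100 measured units
srs = draw(100)                                    # simple random sample, same size
print("RSS estimate:", reliable_life(rss, R=0.9))
print("SRS estimate:", reliable_life(srs, R=0.9))
print("true value  :", 10.0 * (-np.log(0.9)) ** (1.0 / 1.5))
```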

17.
In this paper, we consider testing the location parameter with multilevel (or hierarchical) data. A general family of weighted test statistics is introduced. This family includes extensions to the case of multilevel data of familiar procedures such as the t, the sign and the Wilcoxon signed-rank tests. Under mild assumptions, the test statistics have a null limiting normal distribution, which facilitates their use. An investigation of the relative merits of selected members of the family of tests is achieved theoretically by deriving their asymptotic relative efficiency (ARE) and empirically via a simulation study. It is shown that the performance of a test depends on the cluster configurations and on the intracluster correlations. Explicit formulas for optimal weights and a discussion of the impact of omitting a level are provided for 2- and 3-level data. It is shown that using appropriate weights can greatly improve the performance of the tests. Finally, the use of the new tests is illustrated with a real data example.
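
A sketch of a cluster-weighted sign test for multilevel data follows, with a simple plug-in variance for the weighted sum of the cluster sign counts; the weights, the variance estimate and the function names are illustrative assumptions rather than the optimal choices derived in the paper.

```python
import numpy as np
from scipy.stats import norm

def weighted_sign_test(clusters, theta0=0.0, weights=None):
    """Cluster-weighted sign test of H0: location = theta0.  Each cluster
    contributes its sign count (mean zero under H0); the null variance is
    estimated by the weighted sum of squared cluster contributions."""
    contrib = np.array([np.sum(np.sign(np.asarray(c) - theta0)) for c in clusters])
    w = np.ones(len(clusters)) if weights is None else np.asarray(weights, float)
    stat = np.sum(w * contrib)
    var = np.sum(w**2 * contrib**2)
    z = stat / np.sqrt(var)
    return z, 2 * norm.sf(abs(z))

rng = np.random.default_rng(0)
clusters = [rng.normal(0.3, 1.0, size=rng.integers(3, 8)) for _ in range(40)]
print(weighted_sign_test(clusters, theta0=0.0))
```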

18.
When data are missing, analyzing only the records that are completely observed may cause bias or inefficiency. Existing approaches to handling missing data include likelihood, imputation and inverse probability weighting. In this paper, we propose three estimators inspired by deleting some completely observed data in the regression setting. First, we generate artificial observation indicators that are independent of the outcome given the observed data and draw inferences conditioning on the artificial observation indicators. Second, we propose a closely related weighting method. The proposed weighting method has more stable weights than those of the inverse probability weighting method (Zhao, L., Lipsitz, S., 1992. Designs and analysis of two-stage studies. Statistics in Medicine 11, 769–782). Third, we improve the efficiency of the proposed weighting estimator by subtracting the projection of the estimating function onto the nuisance tangent space. When data are missing completely at random, we show that the proposed estimators have asymptotic variances smaller than or equal to the variance of the estimator obtained from using completely observed records only. Asymptotic relative efficiency computations and simulation studies indicate that the proposed weighting estimators are more efficient than the inverse probability weighting estimators under a wide range of practical situations, especially when the missingness proportion is large.
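
For comparison with the methods above, here is a minimal sketch of the inverse probability weighting estimator mentioned in the abstract (not the authors' proposed estimators): the observation probability is modeled by logistic regression and the complete cases are reweighted in a weighted least squares fit, using simulated data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)
p_obs = 1.0 / (1.0 + np.exp(-(0.5 + 1.0 * x)))   # missingness of y depends on x
r = rng.uniform(size=n) < p_obs

# Step 1: model the observation probability with logistic regression.
obs_model = sm.Logit(r.astype(float), sm.add_constant(x)).fit(disp=0)
pi_hat = obs_model.predict(sm.add_constant(x))

# Step 2: weighted least squares on observed records, weight = 1 / pi_hat.
wls = sm.WLS(y[r], sm.add_constant(x[r]), weights=1.0 / pi_hat[r]).fit()
print(wls.params)                                 # close to (1.0, 2.0)
```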

19.
Unbalanced-size samples arise naturally in equal-employment cases, as the minority fraction of all employees or applicants is invariably less than one half. Motivated by an actual case in which a median test with no power to detect disparate treatment was accepted in court, we develop a symmetrized form of the control median test having the same asymptotic properties as the median test. Since the actual case concerned the relative merits of the median and Wilcoxon tests, a Monte Carlo study of the power of the new test and other nonparametric tests is reported. The results show that the new procedure is more powerful than the ordinary median test in small unbalanced samples. When the data come from a normal or double-exponential law, however, the Wilcoxon test is usually superior to either of the others. When the data come from a Cauchy distribution, on the other hand, the powers of the procedures are typically reversed.

20.
We propose a test based on Bonferroni's measure of skewness. The test detects the asymmetry of a distribution function about an unknown median. We study the asymptotic distribution of the given test statistic and provide a consistent estimate of its variance. The asymptotic relative efficiency of the proposed test is computed along with Monte Carlo estimates of its power. This allows us to perform a comparison of the test based on Bonferroni's measure with other tests for symmetry.
