Similar Articles
20 similar articles found
1.
In this article, we consider empirical likelihood inference for the parameter in additive partially linear models when the linear covariate is measured with error. By correcting for attenuation, a corrected-attenuation empirical log-likelihood ratio statistic for the unknown parameter β, which is of primary interest, is suggested. We show that the proposed statistic is asymptotically standard chi-square distributed without requiring undersmoothing of the nonparametric components, and hence it can be used directly to construct a confidence region for β. Simulations indicate that, in terms of coverage probability and average confidence-interval length, the proposed method performs better than the profile-based least-squares method. We also give the maximum empirical likelihood estimator (MELE) for β and prove that the MELE is asymptotically normal under mild conditions.

2.
In “before” and “after” surveys of Attitudes Towards Random Breath Testing in South Australia, three basic versions of the questionnaire were used. In the first, a set of “lead-up” questions designed to deliberately bias the results towards acceptance of the tests was included before the main questions; in the second, there were no lead-up questions; in the third, a different set of lead-up questions was used, aimed at deliberately biasing the results against the tests. In two of the four attempts to influence the answers (compared with no lead-up questions), the results were significant in the expected direction; in the other two cases they were in the correct direction but not significant. The difference between the positive- and negative-biasing versions was highly significant in both cases. It is important to be aware that changes in context, rather than in question wording per se, can give rise to effects which dwarf the sampling error.

3.
In the prospective study of a finely stratified population, one individual from each stratum is chosen at random for the “treatment” group and one for the “non-treatment” group. For each individual, the probability of failure is a logistic function of parameters designating the stratum, the treatment and a covariate. Uniformly most powerful unbiased tests for the treatment effect are given. These tests are generally cumbersome but, if the covariate is dichotomous, the tests and confidence intervals are simple. Readily usable (but non-optimal) tests are also proposed for polytomous covariates and factorial designs. These are then adapted to retrospective studies (in which one “success” and one “failure” per stratum are sampled). Tests for retrospective studies with a continuous “treatment” score are also proposed.

4.
This paper deals with the asymptotics of a class of tests for association in 2-way contingency tables based on square forms in cell frequencies, given the total number of observations (multinomial sampling) or one set of marginal totals (stratified sampling). The case when both row and column marginal totals are fixed (hypergeometric sampling) was studied in Kulinskaya (1994). The class of tests under consideration includes a number of classical measures of association. Its two subclasses are the tests based on statistics using centralized cell frequencies (asymptotically distributed as weighted sums of central chi-squares) and those using the non-centralized cell frequencies (asymptotically normal). The parameters of the asymptotic distributions depend on the sampling model and on the true marginal probabilities. Maximum efficiency for asymptotically normal statistics is achieved under hypergeometric sampling. If the cell frequencies or the statistic as a whole are centralized using marginal proportions as estimates for marginal probabilities, the asymptotic distribution does not differ much between models and is equivalent to that under hypergeometric sampling. These findings give extra justification for the use of permutation tests for association (which are based on hypergeometric sampling). As an application, several well-known measures of association are analysed.

5.
This article focuses on estimating rare events using multilevel splitting schemes. The event of interest is that a Markov process enters some rare set before another (“tabu”) set. It is known that in this setting a large deviations analysis is not always sufficient for constructing asymptotically efficient importance sampling schemes; additional modifications to the change of measure suggested by large deviations are needed. As an alternative, we design an asymptotically efficient multilevel splitting scheme that relies on the large deviations analysis only. This property makes it more flexible and easier to implement than corresponding importance sampling schemes.
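A minimal sketch of the splitting idea under assumed dynamics: a drifted Gaussian random walk must reach a high level (the rare set) before falling back to zero (the tabu set), and a fixed-effort multilevel splitting estimator multiplies the estimated stage-wise crossing probabilities. The walk, levels, and drift below are illustrative and not taken from the paper.

```python
# Fixed-effort multilevel splitting for P(walk hits B before 0); toy model only.
import numpy as np

rng = np.random.default_rng(0)

def run_until(x, lo, hi, drift=-0.2, sd=1.0):
    """Simulate X_{k+1} = X_k + N(drift, sd) until X <= lo or X >= hi."""
    while lo < x < hi:
        x += rng.normal(drift, sd)
    return x

def splitting_estimate(levels, n_per_stage=2000, x0=0.5, tabu=0.0):
    states = np.full(n_per_stage, x0)
    estimate = 1.0
    for lev in levels:
        # run every current trajectory until it reaches the next level or the tabu set
        ends = np.array([run_until(x, tabu, lev) for x in states])
        survivors = ends[ends >= lev]
        p_stage = len(survivors) / len(ends)
        if p_stage == 0.0:
            return 0.0
        estimate *= p_stage
        # restart the next stage from the entrance states, resampled with replacement
        states = rng.choice(survivors, size=n_per_stage, replace=True)
    return estimate

levels = [1.5, 3.0, 4.5, 6.0]        # intermediate levels up to the rare set B = 6
print(splitting_estimate(levels))    # product of the stage-wise crossing probabilities
```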

6.
In “stepwise” regression analysis, the usual procedure enters or removes variables at each “step” on the basis of testing whether certain partial correlation coefficients are zero. An alternative method suggested in this paper involves testing the hypothesis that the mean square error of prediction does not decrease from one step to the next. This is equivalent to testing that the partial correlation coefficient is equal to a certain nonzero constant. For sample sizes sufficiently large, Fisher's z transformation can be used to obtain an asymptotically UMP unbiased test. The two methods are contrasted with an example involving actual data.
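The alternative step-selection rule can be illustrated with Fisher's z transformation as described above: test whether a sample (partial) correlation equals a fixed nonzero constant ρ0. The data values, ρ0, and the number of partialled-out variables below are illustrative assumptions.

```python
# Fisher's z test of H0: rho = rho0 for a (partial) correlation coefficient.
import numpy as np
from scipy import stats

def fisher_z_test(r, n, rho0, k=0):
    """Two-sided test of H0: rho = rho0 based on a sample (partial) correlation r.

    n is the sample size and k the number of variables partialled out;
    sqrt(n - k - 3) * (atanh(r) - atanh(rho0)) is approximately N(0, 1) under H0.
    """
    z = np.sqrt(n - k - 3) * (np.arctanh(r) - np.arctanh(rho0))
    p = 2 * stats.norm.sf(abs(z))
    return z, p

# Example: is the partial correlation of a candidate regressor larger than rho0 = 0.3?
z, p = fisher_z_test(r=0.45, n=120, rho0=0.30, k=2)
print(f"z = {z:.2f}, p = {p:.3f}")
```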

7.
The authors show how an adjusted pseudo‐empirical likelihood ratio statistic that is asymptotically distributed as a chi‐square random variable can be used to construct confidence intervals for a finite population mean or a finite population distribution function from complex survey samples. They consider both non‐stratified and stratified sampling designs, with or without auxiliary information. They examine the behaviour of estimates of the mean and the distribution function at specific points using simulations calling on the Rao‐Sampford method of unequal probability sampling without replacement. They conclude that the pseudo‐empirical likelihood ratio confidence intervals are superior to those based on the normal approximation, whether in terms of coverage probability, tail error rates or average length of the intervals.

8.
In this paper we propose test statistics based on Fisher's method of combining tests for hypotheses involving two or more parameters simultaneously. It is shown that these tests are asymptotically efficient in the sense of Bahadur. It is then shown how these tests can be modified to give sequential test procedures which are efficient in the sense of Berk and Brown (1978).

The results in section 3 generalize the work of Perng (1977) and Durairajan (1980).
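For context, a minimal sketch of Fisher's method of combining independent tests, the building block referred to above; the p-values are made up, and SciPy's scipy.stats.combine_pvalues provides the same computation.

```python
# Fisher's method: -2 * sum(log p_i) ~ chi-square with 2k degrees of freedom under H0.
import numpy as np
from scipy import stats

def fisher_combine(pvalues):
    """Combine independent p-values into one overall test."""
    pvalues = np.asarray(pvalues, dtype=float)
    stat = -2.0 * np.log(pvalues).sum()
    df = 2 * len(pvalues)
    return stat, stats.chi2.sf(stat, df)

stat, p = fisher_combine([0.04, 0.10, 0.32])
print(f"chi2 = {stat:.2f}, combined p = {p:.3f}")
```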

9.
This paper elaborates on earlier contributions of Bross (1985) and Millard (1987), who point out that when conducting conventional hypothesis tests in order to “prove” environmental hazard or environmental safety, unrealistically large sample sizes are required to achieve acceptable power with customarily used values of the Type I error probability. These authors also note that “proof of safety” typically requires much larger sample sizes than “proof of hazard”. When the sample has yet to be selected and it is feared that the sample size will be insufficient to conduct a reasonable.

10.
The type I and type II error rates of several statistical tests for seasonality in monthly data were investigated through a computer simulation study at two nominal significance levels, α = 1% and α = 5%. Three models were used for the variation: annual sinusoidal; semi-annual sinusoidal; and a curve which is constant in all but three consecutive months of the year, when it exhibits a constant increase (a “one-pulse” model). The statistical tests are compared in terms of the simulation results. These results may be applied to calculate either the sample size required to detect seasonal variation of fixed amplitude or the probability of detecting seasonal variation of variable amplitude with a fixed sample size. A numerical case study is given.
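A hedged sketch of this kind of simulation: monthly counts are drawn under an annual sinusoidal model (or under no seasonality), and the rejection rate of one simple seasonality test, a Pearson chi-square test against a uniform month distribution, is estimated by Monte Carlo. The model, amplitude, sample size, and the particular test are illustrative assumptions, not the exact procedures compared in the paper.

```python
# Monte Carlo type I error / power estimate for a seasonality test on monthly counts.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def simulate_power(n_cases=240, amplitude=0.3, n_rep=2000, alpha=0.05):
    months = np.arange(12)
    probs = 1.0 + amplitude * np.sin(2 * np.pi * months / 12)   # annual sinusoid
    probs /= probs.sum()
    rejections = 0
    for _ in range(n_rep):
        counts = rng.multinomial(n_cases, probs)
        chi2, p = stats.chisquare(counts)     # H0: all 12 months equally likely
        rejections += (p < alpha)
    return rejections / n_rep

print(simulate_power(amplitude=0.0))   # type I error rate, should be near alpha
print(simulate_power(amplitude=0.3))   # power against an annual sinusoidal pattern
```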

11.
Pareto sampling was introduced by Rosén in the late 1990s. It is a simple method for drawing a fixed-size πps sample, although the inclusion probabilities are only approximately as desired. Sampford sampling, introduced by Sampford in 1967, gives the desired inclusion probabilities, but it may take time to generate a sample. Using probability functions and Laplace approximations, we show that, from a probabilistic point of view, these two designs are very close to each other and asymptotically identical. A Sampford sample can rapidly be generated in all situations by letting a Pareto sample pass an acceptance–rejection filter. A new, very efficient method to generate conditional Poisson (CP) samples appears as a by-product. Further, it is shown how the inclusion probabilities of all orders for the Pareto design can be calculated from those of the CP design. A new, explicit, very accurate approximation of the second-order inclusion probabilities, valid for several designs, is presented and applied to obtain single-sum-type variance estimates of the Horvitz–Thompson estimator.
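A minimal sketch of the Pareto πps design itself: each unit receives the ranking variable Q_i = U_i(1 − p_i)/((1 − U_i)p_i) and the n units with the smallest Q_i form the sample. The size measure and target inclusion probabilities below are illustrative, and the acceptance–rejection filter that converts this into an exact Sampford sample is not shown.

```python
# Pareto pi-ps sampling via Rosén's ranking variables (illustrative population).
import numpy as np

rng = np.random.default_rng(2)

def pareto_sample(p, n):
    """Draw a fixed-size sample of n units with target inclusion probabilities p."""
    p = np.asarray(p, dtype=float)
    u = rng.uniform(size=p.size)
    q = u * (1 - p) / ((1 - u) * p)   # ranking variables
    return np.argsort(q)[:n]          # indices of the n smallest ranking variables

sizes = rng.uniform(1, 10, size=200)      # auxiliary size measure
n = 20
p = n * sizes / sizes.sum()               # target inclusion probabilities (all < 1 here)
print(np.sort(pareto_sample(p, n)))
```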

12.
Although several authors have indicated that the median test has low power in small samples, it continues to be presented in many statistical textbooks, included in a number of popular statistical software packages, and used in a variety of application areas. We present results of a power simulation study that shows that the median test has noticeably lower power, even for the double exponential distribution for which it is asymptotically most powerful, than other readily available rank tests. We suggest that the median test be “retired” from routine use and recommend alternative rank tests that have superior power over a relatively large family of symmetric distributions.
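A hedged sketch of the kind of power comparison reported above: shifted double-exponential (Laplace) samples, with the rejection rates of the median test and of the Wilcoxon rank-sum test estimated by Monte Carlo. Sample size, shift, and replication count are illustrative.

```python
# Monte Carlo power comparison: median test vs Wilcoxon rank-sum on Laplace data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def power(shift, n=20, n_rep=2000, alpha=0.05):
    rej_median, rej_wilcoxon = 0, 0
    for _ in range(n_rep):
        x = rng.laplace(0.0, 1.0, n)
        y = rng.laplace(shift, 1.0, n)
        _, p_med, _, _ = stats.median_test(x, y)
        _, p_wil = stats.mannwhitneyu(x, y, alternative="two-sided")
        rej_median += (p_med < alpha)
        rej_wilcoxon += (p_wil < alpha)
    return rej_median / n_rep, rej_wilcoxon / n_rep

print(power(shift=1.0))   # the median test typically shows noticeably lower power
```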

13.
C. R. Rao pointed out that “The role of statistical methodology is to extract the relevant information from a given sample to answer specific questions about the parent population” and raised the question, “What population does a sample represent?” Wrong specification can lead to invalid inference, giving rise to a third kind of error. Rao introduced the concept of weighted distributions as a method of adjustment applicable to many situations.

In this paper, we study the relationship between the weighted distributions and the parent distributions in the context of reliability and life testing. These relationships depend on the nature of the weight function and give rise to interesting connections between the different ageing criteria of the two distributions. As special cases, the length biased distribution, the equilibrium distribution of the backward and forward recurrence times and the residual life distribution, which frequently arise in practice, are studied and their relationships with the original distribution are examined. Their survival functions, failure rates and mean residual life functions are compared and some characterization results are established.
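As a small numerical illustration of the weighted-distribution idea: the length-biased version of a parent density f is f_w(x) = x f(x)/E[X], and for a Gamma(k, θ) parent this weighted density is again a Gamma with shape k + 1, so the parent and weighted failure rates are easy to compare. The parameter values below are illustrative.

```python
# Comparing failure (hazard) rates of a Gamma parent and its length-biased version.
import numpy as np
from scipy import stats

k, theta = 2.0, 1.5
parent = stats.gamma(k, scale=theta)
length_biased = stats.gamma(k + 1, scale=theta)   # x * f(x) / E[X] is Gamma(k + 1, theta)

x = np.linspace(0.5, 10, 5)
hazard_parent = parent.pdf(x) / parent.sf(x)
hazard_weighted = length_biased.pdf(x) / length_biased.sf(x)
print(np.round(hazard_parent, 3))
print(np.round(hazard_weighted, 3))   # length biasing changes the failure rate pointwise
```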

14.
Since the 1960s the Bayesian case against frequentist inference has been partly built on several “classic” examples which are devised to show how frequentist inference procedures can give rise to fallacious results; see Berger and Wolpert (1988) [2]. The primary aim of this note is to revisit one of these examples, the Berger location model, that is supposed to demonstrate the fallaciousness of frequentist Confidence Interval (CI) estimation. A closer look at the example, however, reveals that the fallacious results stem primarily from the problematic nature of the example itself, since it is based on a non-regular probability model that enables one to (indirectly) assign probabilities to the unknown parameter. Moreover, the proposed confidence set is not a proper frequentist CI in the sense that it is not defined in terms of legitimate error probabilities.

15.
The confidence interval for the Kaplan–Meier estimate of the survival probability at a fixed time point is often constructed from the Greenwood formula. This normal-approximation-based method can be viewed as a Wald-type confidence interval for a binomial proportion (the survival probability) using the “effective” sample size defined by Cutler and Ederer. The Wald-type binomial confidence interval has been shown to perform poorly compared with other methods. We consider three binomial confidence-interval methods for constructing the confidence interval for the survival probability: Wilson's method, the Agresti–Coull method, and a higher-order asymptotic likelihood method. The “effective” sample sizes proposed by Peto et al. and by Dorey and Korn are also considered. The Greenwood formula is far from satisfactory, while confidence intervals based on the three binomial-proportion methods with Cutler and Ederer's “effective” sample size perform much better.
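A hedged sketch of the construction: the Kaplan–Meier estimate and its Greenwood variance at a time point are converted to an “effective” number of binomial trials, and a binomial interval (Wilson's here) is computed on that scale instead of the plain Greenwood/Wald interval. The toy data and the specific effective-sample-size definition below (n_eff chosen so that the binomial variance matches the Greenwood variance) are illustrative assumptions.

```python
# Greenwood/Wald vs Wilson-on-effective-sample-size intervals for S(t0); toy data.
import numpy as np
from scipy import stats

times = np.array([2, 3, 3, 5, 6, 7, 7, 9, 10, 12])   # right-censored: (time, event flag)
events = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])

def km_at(t0):
    """Kaplan-Meier estimate and Greenwood variance at time t0."""
    s, gw = 1.0, 0.0
    for t in np.unique(times[events == 1]):
        if t > t0:
            break
        n_risk = np.sum(times >= t)
        d = np.sum((times == t) & (events == 1))
        s *= 1 - d / n_risk
        gw += d / (n_risk * (n_risk - d))
    return s, s**2 * gw

def wilson(p_hat, n, alpha=0.05):
    z = stats.norm.ppf(1 - alpha / 2)
    centre = (p_hat + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * np.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return centre - half, centre + half

s_hat, var_gw = km_at(7)
n_eff = s_hat * (1 - s_hat) / var_gw                       # "effective" sample size
z = stats.norm.ppf(0.975)
wald = (s_hat - z * np.sqrt(var_gw), s_hat + z * np.sqrt(var_gw))   # Greenwood/Wald
print(np.round(wald, 3), np.round(wilson(s_hat, n_eff), 3))
```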

16.
Teresa Ledwina, Statistics, 2013, 47(4): 565–570
Admissibility of some asymptotically optimal tests of independence against “positive dependence” in an R × C contingency table is deduced from earlier results of the author. “Positive dependence” is specified to be positive regression dependence and positive quadrant dependence.

17.
Oracle Inequalities for Convex Loss Functions with Nonlinear Targets
This article considers penalized empirical loss minimization of convex loss functions with unknown target functions. Using the elastic net penalty, of which the Least Absolute Shrinkage and Selection Operator (Lasso) is a special case, we establish a finite-sample oracle inequality which bounds the loss of our estimator from above with high probability. If the unknown target is linear, this inequality also provides an upper bound on the estimation error of the estimated parameter vector. Next, we use the non-asymptotic results to show that the excess loss of our estimator is asymptotically of the same order as that of the oracle. If the target is linear, we give sufficient conditions for consistency of the estimated parameter vector. We briefly discuss how a thresholded version of our estimator can be used to perform consistent variable selection. We give two examples of loss functions covered by our framework.
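A minimal sketch of the kind of estimator being analysed: penalized empirical loss minimization with the elastic-net penalty (the Lasso when l1_ratio = 1), followed by a thresholded version for variable selection. The data-generating process, penalty weights, and threshold are illustrative; the oracle bounds themselves are not computed here.

```python
# Elastic-net estimation of a sparse linear target, plus simple thresholded selection.
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(4)
n, p = 200, 50
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]                    # sparse linear target
y = X @ beta + rng.normal(scale=0.5, size=n)

fit = ElasticNet(alpha=0.1, l1_ratio=0.7).fit(X, y)
est = fit.coef_

threshold = 0.25                               # thresholded version for variable selection
selected = np.flatnonzero(np.abs(est) >= threshold)
print("estimation error:", np.round(np.linalg.norm(est - beta), 3))
print("selected variables:", selected)
```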

18.
This paper addresses the problem of probability density estimation in the presence of covariates when data are missing at random (MAR). The inverse probability weighted method is used to define nonparametric and semiparametric weighted probability density estimators. A regression calibration technique is also used to define an imputed estimator. It is shown that all the estimators are asymptotically normal with the same asymptotic variance as that of the inverse probability weighted estimator with known selection probability function and weights. We also establish mean squared error (MSE) bounds and obtain the MSE convergence rates. A simulation is carried out to assess the proposed estimators in terms of bias and standard error.
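A hedged sketch of an inverse-probability-weighted kernel density estimator of the kind described above: X is missing at random given a fully observed covariate W, the selection probability is fitted by logistic regression, and each observed X_i enters the kernel estimate with weight 1/π̂(W_i). The data model, kernel, and bandwidth are illustrative.

```python
# IPW kernel density estimate of the density of X under MAR missingness.
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 1000
w = rng.normal(size=n)
x = 0.8 * w + rng.normal(size=n)                 # variable of interest
pi_true = 1 / (1 + np.exp(-(0.5 + 1.2 * w)))     # MAR: observation depends on W only
delta = rng.uniform(size=n) < pi_true            # 1 = observed

pi_hat = LogisticRegression().fit(w.reshape(-1, 1), delta).predict_proba(w.reshape(-1, 1))[:, 1]

def ipw_density(grid, h=0.3):
    weights = delta / pi_hat                     # zero weight for the missing X_i
    kern = stats.norm.pdf((grid[:, None] - np.where(delta, x, 0.0)) / h)
    return (weights * kern).sum(axis=1) / (n * h)

grid = np.linspace(-4, 4, 9)
print(np.round(ipw_density(grid), 3))
print(np.round(stats.norm.pdf(grid, scale=np.sqrt(0.8**2 + 1)), 3))   # true density of X
```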

19.
This paper discusses a pre-test regression estimator which uses the least squares estimate when it is “large” and a ridge regression estimate for “small” regression coefficients, where the preliminary test is applied separately to each regression coefficient in turn to determine whether it is “large” or “small.” For orthogonal regressors, the exact finite-sample bias and mean squared error of the pre-test estimator are derived. The latter is less biased than a ridge estimator, and over much of the parameter space the pre-test estimator has smaller mean squared error than least squares. A ridge estimator is found to be inferior to the pre-test estimator in terms of mean squared error in many situations, and at worst the latter estimator is only slightly less efficient than the former at commonly used significance levels.
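A hedged sketch of this pre-test rule for an orthonormal design: each coefficient keeps its least-squares value if its t statistic exceeds the critical value and is otherwise shrunk by the ridge factor 1/(1 + k). The design, ridge constant k, and significance level are illustrative assumptions.

```python
# Per-coefficient pre-test estimator: OLS if "large", ridge-shrunk if "small".
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n, p, k = 100, 5, 4.0                          # k = ridge constant
Q, _ = np.linalg.qr(rng.normal(size=(n, p)))   # orthonormal columns: Q'Q = I
beta = np.array([3.0, 1.5, 0.4, 0.0, 0.0])
y = Q @ beta + rng.normal(scale=1.0, size=n)

b_ols = Q.T @ y                                # least squares under orthonormality
resid = y - Q @ b_ols
s2 = resid @ resid / (n - p)
t = b_ols / np.sqrt(s2)                        # per-coefficient t statistics (unit design variance)
crit = stats.t.ppf(0.975, n - p)

shrink = 1.0 / (1.0 + k)                       # ridge factor for an orthonormal design
b_pretest = np.where(np.abs(t) > crit, b_ols, shrink * b_ols)
print(np.round(b_ols, 2), np.round(b_pretest, 2))
```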

20.
We propose replacing the usual Student's t statistic, which tests for equality of the means of two distributions and is used to construct a confidence interval for the difference, by a biweight-“t” statistic. The biweight-“t” is the ratio of the difference of the biweight estimates of location from the two samples to an estimate of the standard error of this difference. Three forms of the denominator are evaluated: weighted variance estimates using both pooled and unpooled scale estimates, and unweighted variance estimates using an unpooled scale estimate. Monte Carlo simulations reveal that the resulting confidence intervals are highly efficient for moderate sample sizes, and that nominal levels are nearly attained, even when considering extreme percentage points.
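A hedged sketch in the spirit of the biweight-“t”: Tukey biweight location estimates are computed for each sample and their difference is divided by a standard-error estimate. The paper evaluates several analytic variance forms; here the standard error is simply bootstrapped for illustration, and the tuning constant and data are made up.

```python
# Biweight locations for two samples and a bootstrap-standardized difference.
import numpy as np

rng = np.random.default_rng(7)

def biweight_location(x, c=6.0, n_iter=10):
    t = np.median(x)
    for _ in range(n_iter):
        mad = np.median(np.abs(x - t)) or 1e-12
        u = (x - t) / (c * mad)
        w = np.where(np.abs(u) < 1, (1 - u**2) ** 2, 0.0)
        t = np.sum(w * x) / np.sum(w)
    return t

def biweight_t(x, y, n_boot=2000):
    diff = biweight_location(x) - biweight_location(y)
    boot = np.array([
        biweight_location(rng.choice(x, x.size)) - biweight_location(rng.choice(y, y.size))
        for _ in range(n_boot)
    ])
    return diff / boot.std(ddof=1)

x = rng.standard_t(df=3, size=25) + 0.8      # heavy-tailed samples with a location shift
y = rng.standard_t(df=3, size=25)
print(round(biweight_t(x, y), 2))
```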
