Similar Literature
 20 similar records found
1.
R.M. Hollander, D.H. Park and F. Proschan [A class of life distributions for aging, J. Amer. Statist. Assoc. 81 (1986) 91–95] introduced the concept of the class of life distributions called new better than used of specified age. In practice, one may be interested in the new-better-than-used behaviour at an unknown but estimable age t0. Here, we investigate testing against new better than used of specified age t0 (NBU-t0) alternatives. A class of test statistics for testing NBU-t0 (with t0 known), based on a U-statistic whose kernel depends on sub-sample minima, is proposed. A member of the class of tests proposed by N. Ebrahimi and M. Habibullah [Testing whether the survival distribution is new better than used of specified age, Biometrika 77 (1990) 212–215] for this problem belongs to the class proposed here. The distributional properties of the class of test statistics are studied, and the performance of a few members of the proposed class is assessed in terms of Pitman asymptotic relative efficiency (ARE). The Pitman ARE values show that members of the class compare well with the Ebrahimi–Habibullah tests. The proposed class of tests is shown to be consistent against NBU-t0 alternatives.
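The NBU-t0 kernel itself is not reproduced in this summary, but the underlying machinery, a U-statistic that averages a symmetric kernel over all sub-samples of a fixed size, can be sketched as follows (a minimal sketch in Python; the kernel `h` below, the plain sub-sample minimum, is a placeholder for illustration, not the paper's kernel):

```python
from itertools import combinations

def u_statistic(data, kernel, m):
    """Average a symmetric kernel over all sub-samples of size m."""
    subs = list(combinations(data, m))
    return sum(kernel(sub) for sub in subs) / len(subs)

# Placeholder kernel for illustration only: the sub-sample minimum.
def h(sub):
    return min(sub)

print(u_statistic([1.0, 2.0, 3.0], h, 2))  # mean of pairwise minima: 4/3
```

Any kernel depending on the sub-sample minimum, such as an indicator involving a specified age t0, slots into `kernel` unchanged.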

2.
This article is concerned with comparing the P-value and the Bayesian measure of evidence when testing a point null hypothesis for the variance of a normal distribution with unknown mean. First, using a fixed prior for the test parameter and an appropriate prior for the mean parameter, the posterior probability of H0 is obtained and compared with the P-value. Second, lower bounds of the posterior probability of H0 over a reasonable class of priors are compared with the P-value. It is shown that, even in the presence of nuisance parameters, these two approaches can lead to different conclusions in statistical inference.
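The bounds in this article are derived for the normal-variance problem and are not reproduced here. As a generic illustration of the gap between the two measures, the well-known −e·p·log(p) lower bound on the Bayes factor for a point null (valid for p < 1/e) can be computed; this is a sketch, and the equal-prior default `prior_h0=0.5` is an illustrative assumption:

```python
import math

def bayes_factor_lower_bound(p):
    """Lower bound on the Bayes factor in favour of H0, valid for 0 < p < 1/e."""
    return -math.e * p * math.log(p)

def posterior_h0_lower_bound(p, prior_h0=0.5):
    """Corresponding lower bound on P(H0 | data)."""
    odds = (prior_h0 / (1.0 - prior_h0)) * bayes_factor_lower_bound(p)
    return odds / (1.0 + odds)

# A p-value of 0.05 cannot push the posterior probability of H0 below ~0.29.
print(round(posterior_h0_lower_bound(0.05), 3))
```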

3.
Consider k (≥ 2) normal populations with unknown means μ1, …, μk and a common known variance σ². Let μ[1] ≤ … ≤ μ[k] denote the ordered μi. The populations associated with the t (1 ≤ t ≤ k − 1) largest means are called the t best populations. Hsu and Panchapakesan (2004) proposed and investigated a procedure RHP for selecting a non-empty subset of the k populations, of size at most m (1 ≤ m ≤ k − t), so that at least one of the t best populations is included in the selected subset with a minimum guaranteed probability P* whenever μ[k − t + 1] − μ[k − t] ≥ δ*, where P* and δ* are specified in advance of the experiment. This probability requirement is known as the indifference-zone probability requirement. In the present article, we investigate the same procedure RHP for the same goal, but when k − t < m ≤ k − 1, so that at least one of the t best populations is included in the selected subset with a minimum guaranteed probability P* whatever the configuration of the unknown μi. The probability requirement in this latter case is termed the subset selection probability requirement. Santner (1976) proposed and investigated a different procedure (RS), based on samples of size n from each of the populations, considering both cases 1 ≤ m ≤ k − t and k − t < m ≤ k. The special case t = 1 was studied earlier by Gupta and Santner (1973) and Hsu and Panchapakesan (2002) for their respective procedures.

4.
We develop the score test for the hypothesis that a parameter of a Markov sequence is constant over time, against the alternative that it varies over time, i.e., θt = θ + Ut, t = 1, 2, …, where {Ut; t = 1, 2, …} is a sequence of independently and identically distributed random variables with mean zero and variance σ_u², and θ is a fixed constant. The asymptotic null distribution of the test statistic is shown to be normal. We illustrate our procedure with examples and a real-life data analysis.

5.
We investigate the asymptotic behaviour of the probability density function (pdf) and the cumulative distribution function (cdf) of Student's t-distribution with ν > 0 degrees of freedom (tν for short) as ν tends to infinity, when the argument x = xν of the pdf (cdf) depends on ν and tends to ±∞ (−∞). To this end, we consider the ratio of the pdf (cdf) of the tν-distribution to that of the standard normal distribution. Depending on the choice of the argument xν, the pdf ratio (cdf ratio) tends to 1, to a fixed value greater than 1, or to ∞. As a by-product, we obtain a result for Mills' ratio when xν → −∞.
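The two regimes for the pdf ratio can be checked numerically; a small sketch using the standard t density, computed via log-gamma for numerical stability:

```python
import math

def t_pdf(x, nu):
    """Density of Student's t with nu degrees of freedom."""
    logc = math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2) \
           - 0.5 * math.log(nu * math.pi)
    return math.exp(logc - (nu + 1) / 2 * math.log1p(x * x / nu))

def normal_pdf(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

# Fixed argument: the pdf ratio tends to 1 as nu grows.
print(t_pdf(2.0, 10_000) / normal_pdf(2.0))

# Argument growing with nu (here x_nu = sqrt(nu)): the ratio diverges.
print(t_pdf(10.0, 100) / normal_pdf(10.0))
```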

6.
ABSTRACT

The receiver operating characteristic (ROC) curve is a popular graphical method frequently used to study the diagnostic capacity of continuous (bio)markers. When the outcome of interest is a time-dependent variable, the direct generalization is known as the cumulative/dynamic ROC curve. For a fixed time point t, a subject is allocated to the positive group if the event happens before t and to the negative group if the event has not happened by t. The presence of censored subjects, who cannot be directly assigned to a group, is the main handicap of this approach. The proposed cumulative/dynamic ROC curve estimator assigns to each subject censored before t a probability of belonging to the negative (positive) group. The performance of the resulting estimator is studied through Monte Carlo simulations, and some real-world applications are reported. Results suggest that the new estimators provide a good approximation to the true cumulative/dynamic ROC curve.
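One common way to assign such a probability (the paper's precise estimator may differ) uses the Kaplan-Meier estimate S of the event-time survival function: a subject censored at c < t is taken to be event-free at t with probability S(t)/S(c). A minimal sketch:

```python
def km_survival(times, events):
    """Kaplan-Meier survival estimate, returned as a step function S."""
    event_times = sorted({t for t, d in zip(times, events) if d == 1})
    steps, s = [], 1.0
    for u in event_times:
        at_risk = sum(1 for t in times if t >= u)
        deaths = sum(1 for t, d in zip(times, events) if t == u and d == 1)
        s *= 1.0 - deaths / at_risk
        steps.append((u, s))
    def S(t):
        value = 1.0
        for u, v in steps:
            if u <= t:
                value = v
        return value
    return S

def prob_negative(c, t, S):
    """P(event-free at t | censored, still event-free, at c < t)."""
    return S(t) / S(c) if S(c) > 0 else 0.0

S = km_survival([1.0, 2.0, 3.0], [1, 0, 1])
print(S(1.0), S(2.5), prob_negative(2.0, 2.5, S))
```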

7.
A randomized procedure is described for constructing an exact test from a test statistic F whose null distribution is unknown. The procedure is restricted to cases where F is a function of a random element U that has a known distribution under the null hypothesis. The power of the exact randomized test is shown to be greater in some cases than the power of the exact nonrandomized test that could be constructed if the null distribution of F were known.
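Because the distribution of U under the null hypothesis is known, the null distribution of F = F(U) can be tabulated or simulated, and an exactly size-α randomized test built from it. A sketch, under the assumption that a sample from the null distribution of F is available:

```python
import random

def randomized_test(f_obs, f_null, alpha, rng=random.random):
    """Exact randomized test of size alpha built from the null distribution of F.

    Rejects when f_obs exceeds the critical value c, and randomizes with
    probability gamma when f_obs equals c, so that under H0
    P(F > c) + gamma * P(F = c) = alpha (up to Monte Carlo error).
    """
    vals = sorted(f_null, reverse=True)
    n = len(vals)
    c = vals[min(int(alpha * n), n - 1)]
    p_above = sum(v > c for v in vals) / n
    p_at = sum(v == c for v in vals) / n
    gamma = 0.0 if p_at == 0 else (alpha - p_above) / p_at
    if f_obs > c:
        return True
    if f_obs == c:
        return rng() < gamma
    return False

null = list(range(100))                  # stand-in null distribution of F
print(randomized_test(99, null, 0.055))  # clearly above c: reject
```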

8.
Abstract

This article is concerned with the comparison of Bayesian and classical testing of a point null hypothesis for the Pareto distribution when there is a nuisance parameter. In the first stage, using a fixed prior distribution, the posterior probability is obtained and compared with the P-value. In the second, lower bounds of the posterior probability of H0, under a reasonable class of prior distributions, are compared with the P-value. It is shown that even in the presence of nuisance parameters for the model, these two approaches can lead to different results in statistical inference.

9.
A sequence of independent lifetimes X1, X2, …, Xm, Xm+1, …, Xn is observed from a mixture of a degenerate and a left-truncated exponential (LTE) distribution, with reliability R at time τ, minimum life length η, and unknown proportion p1 and parameter θ1. Later it is found that there was a change in the system at some point of time m, reflected in the sequence after Xm by a change in the reliability R at time τ, with unknown proportion p2 and parameter θ2. This distribution occurs in many practical situations; for instance, the life of a unit may follow an LTE distribution while some of the units fail instantaneously. Apart from mixture distributions, the phenomenon of a change point is also observed in several situations in life testing and reliability estimation: at some point of time, instability in the sequence of failure times is observed. The problem of study is when and where this change started occurring; this is called the change point inference problem. The estimators of m, R1(t0), R2(t0), p1, and p2 are derived under asymmetric loss functions, namely the Linex and general entropy loss functions. Both non-informative and informative priors are considered, and the effect of the choice of prior on the Bayes estimates of the change point is also studied.

10.
The Fisher exact test has been unjustly dismissed by some as 'only conditional', whereas it is unconditionally the uniformly most powerful test among all unbiased tests of size α, with power greater than its nominal level of significance α. The problem with this truly optimal test is that it requires randomization at the critical value(s) to be of size α. Obviously, in practice, one does not want to conclude that 'with probability x we have a statistically significant result.' Usually, the hypothesis is rejected only if the test statistic's outcome is more extreme than the critical value, reducing the actual size considerably.

The randomized unconditional Fisher exact test is constructed (using Neyman-structure arguments) by deriving a conditional randomized test that randomizes at critical values c(t) with probabilities γ(t), both of which depend on the total number of successes T (the complete sufficient statistic for the nuisance parameter, the common success probability) conditioned upon.

In this paper, the Fisher exact test is approximated by deriving nonrandomized conditional tests whose critical region includes the critical value only if γ(t) > γ0, for a fixed threshold value γ0, such that the size of the unconditional modified test is, for all values of the nuisance parameter (the common success probability), smaller than but as close as possible to α. It will be seen that this greatly improves the size of the test as compared with the conservative nonrandomized Fisher exact test.

Comparisons of size, power, and p-value with the (virtual) randomized Fisher exact test, the conservative nonrandomized Fisher exact test, Pearson's chi-square test, the more competitive mid-p value, McDonald's modification, and Boschloo's modification are performed under the assumption of two binomial samples.
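For reference, the conditional building blocks, the hypergeometric tail that gives the one-sided Fisher exact p-value and its mid-p variant, can be sketched as follows (two binomial samples of sizes n1, n2, conditioning on the total number of successes t):

```python
from math import comb

def hypergeom_pmf(k, n1, n2, t):
    """P(first-sample successes = k | total successes = t) under H0."""
    return comb(n1, k) * comb(n2, t - k) / comb(n1 + n2, t)

def fisher_one_sided(a, n1, n2, t, mid_p=False):
    """One-sided Fisher exact p-value P(K >= a); mid-p halves the mass at a."""
    hi = min(n1, t)
    p = sum(hypergeom_pmf(k, n1, n2, t) for k in range(a, hi + 1))
    if mid_p:
        p -= 0.5 * hypergeom_pmf(a, n1, n2, t)
    return p

# 4 of 5 successes in sample 1 versus 1 of 5 in sample 2.
print(fisher_one_sided(4, 5, 5, 5))               # 26/252, about 0.103
print(fisher_one_sided(4, 5, 5, 5, mid_p=True))   # 13.5/252, about 0.054
```

The example shows the mid-p correction pulling an over-conservative exact p-value closer to the nominal level.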

11.
The indirect mechanism of action of immunotherapy causes a delayed treatment effect, producing delayed separation of the survival curves between treatment groups and violating the proportional hazards assumption. Using the log-rank test in immunotherapy trial design could therefore result in a severe loss of efficiency. Although few statistical methods are available for immunotherapy trial designs that incorporate a delayed treatment effect, Ye and Yu recently proposed the use of a maximin efficiency robust test (MERT). The MERT is a weighted log-rank test that puts less weight on early events and full weight after the delayed period. However, the weight function of the MERT involves an unknown function that has to be estimated from historical data. Here, for simplicity, we propose the use of an approximated maximin test, the V0 test, which is the sum of the log-rank test for the full data set and the log-rank test for the data beyond the lag time point. The V0 test fully uses the trial data and is more efficient than the log-rank test when a lag exists, with relatively little efficiency loss when no lag exists. The sample size formula for the V0 test is derived. Simulations are conducted to compare the performance of the V0 test with that of existing tests, and a real trial is used to illustrate cancer immunotherapy trial design with a delayed treatment effect.
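A minimal sketch of the two ingredients: the log-rank score restricted to events from a start time onward (start = 0 gives the usual log-rank score). The combination shown for V0 is schematic only; the variance of the sum, which involves the covariance of the two scores, is omitted here:

```python
def logrank_score(times, events, group, start=0.0):
    """Log-rank score U = sum over event times >= start of (O1 - E1), with variance V."""
    u = v = 0.0
    for s in sorted({t for t, d in zip(times, events) if d == 1 and t >= start}):
        n = sum(1 for t in times if t >= s)  # number at risk
        n1 = sum(1 for t, g in zip(times, group) if t >= s and g == 1)
        d = sum(1 for t, e in zip(times, events) if t == s and e == 1)
        d1 = sum(1 for t, e, g in zip(times, events, group)
                 if t == s and e == 1 and g == 1)
        u += d1 - d * n1 / n
        if n > 1:
            v += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return u, v

def v0_score(times, events, group, lag):
    """Schematic V0 numerator: full-data score plus score beyond the lag."""
    u_full, _ = logrank_score(times, events, group)
    u_late, _ = logrank_score(times, events, group, start=lag)
    return u_full + u_late

# Identical groups: every (O1 - E1) term vanishes.
print(v0_score([1, 2, 1, 2], [1, 1, 1, 1], [1, 1, 0, 0], lag=1.5))  # 0.0
```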

12.
Kraft, Lepage, and van Eeden (1985) suggested using a symmetrized version of the kernel estimator when the true density f of the observations is known to be symmetric around a possibly unknown point θ. The effect of this symmetrization device depends on the smoothness of the convolution f ∗ f(x) = ∫ f(x + t) f(t) dt at zero. We show that if θ has to be estimated and f is not absolutely continuous, symmetrization may deteriorate the estimate.
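The symmetrization device itself is simple: average the kernel estimate at x with the estimate at the reflection of x about the centre of symmetry. A sketch with a Gaussian kernel; the bandwidth h and the centre theta are inputs, and the paper's point concerns what happens when theta must itself be estimated:

```python
import math

def kde(x, data, h):
    """Gaussian kernel density estimate at x."""
    z = sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in data)
    return z / (len(data) * h * math.sqrt(2 * math.pi))

def kde_symmetrized(x, data, h, theta):
    """Average the estimate at x and at its reflection about theta."""
    return 0.5 * (kde(x, data, h) + kde(2 * theta - x, data, h))

data = [-1.3, -0.2, 0.4, 1.1]
# The symmetrized estimate is exactly symmetric about theta by construction.
print(kde_symmetrized(0.5, data, 0.5, theta=0.0))
print(kde_symmetrized(-0.5, data, 0.5, theta=0.0))
```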

13.
Suppose that the length of time in years for which a business operates until failure has a Pareto distribution. Let t1 ≤ t2 ≤ … ≤ tr denote the survival lifetimes of the first r of a random sample of n businesses. Bayesian predictions are made for the ordered failure times of the remaining (n − r) businesses, using the conditional probability function. Numerical examples are given to illustrate the results.

14.
This paper considers estimation of the function g in the model Yt = g(Xt) + εt when E(εt | Xt) ≠ 0 with nonzero probability. We assume the existence of an instrumental variable Zt that is independent of εt, and of an innovation ηt = Xt − E(Xt | Zt). We use a nonparametric regression of Xt on Zt to obtain residuals ηt, which in turn are used to obtain a consistent estimator of g. The estimator was first analyzed by Newey, Powell & Vella (1999) under the assumption that the observations are independent and identically distributed. Here we derive a sample mean-squared-error convergence result for independent identically distributed observations, as well as a uniform-convergence result under time-series dependence.

15.
The distribution of the Quandt likelihood ratio λ for a two-phase regression has yet to be determined. In particular, it is known that −2 log λ is not distributed as chi-square (Quandt, 1960) when the switch point is unknown.

In this paper we describe sampling experiments which suggest that −log λ has a Pearson Type III distribution. The parameters of the distribution appear to depend not only on the values of the x-vector (Feder, 1968) but also on its dimension k.

16.
One method of assessing the fit of an event history model is to plot the empirical standard deviation of standardized martingale residuals. We develop an alternative procedure which is valid also in the presence of measurement error and is applicable to both longitudinal and recurrent event data. Since the covariance between martingale residuals at times t0 and t > t0 is independent of t, a plot of these covariances should, for fixed t0, show no time trend. A test statistic is developed from the increments in the estimated covariances, and we investigate its properties under various types of model misspecification. Applications of the approach are presented using two Brazilian studies measuring daily prevalence and incidence of infant diarrhoea and a longitudinal study of the treatment of schizophrenia.

17.
Traditionally, when applying the two-sample t test, some pre-testing occurs: the theory-based assumptions of normal distributions and of homogeneity of variances are often tested in the applied sciences before the intended t test is carried out. This paper shows that such pre-testing leads to unknown final type-I and type-II risks if the respective statistical tests are performed on the same set of observations. To gauge the extent of the resulting misstated risks, some theoretical deductions are given and, in particular, a systematic simulation study is carried out. As a result, we propose applying neither pre-tests nor the t test at all, but instead using the Welch test as the standard test: its power comes close to that of the t test when the variances are homogeneous, and for unequal variances and skewness values |γ1| < 3 it retains the so-called 20% robustness, whereas neither the t test nor Wilcoxon's U test can be recommended in most cases.
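For reference, the Welch test replaces the pooled variance by separate sample variances and adjusts the degrees of freedom by the Welch-Satterthwaite formula; a minimal sketch:

```python
import math

def welch_t(x, y):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((a - mx) ** 2 for a in x) / (nx - 1)
    vy = sum((b - my) ** 2 for b in y) / (ny - 1)
    se2 = vx / nx + vy / ny
    t = (mx - my) / math.sqrt(se2)
    df = se2 ** 2 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    return t, df

# Equal variances and sample sizes: df reduces to the pooled nx + ny - 2.
print(welch_t([1, 2, 3, 4, 5], [2, 3, 4, 5, 6]))  # (-1.0, 8.0)
```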

18.
We investigate the interplay of smoothness and monotonicity assumptions when estimating a density from a sample of observations. The nonparametric maximum likelihood estimator of a decreasing density on the positive half line attains a rate of convergence of n^{-1/3} at a fixed point t if the density has a negative derivative at t. The same rate is obtained by a kernel estimator with bandwidth n^{-1/3}, but the limit distributions are different. If the density is both differentiable at t and known to be monotone, then a third estimator is obtained by isotonization of a kernel estimator. We show that this again attains the rate of convergence n^{-1/3}, and compare the limit distributions of the three types of estimators. It is shown that both isotonization and smoothing lead to a more concentrated limit distribution, and we study the dependence on the proportionality constant in the bandwidth. We also show that isotonization does not change the limit behaviour of a kernel estimator with a bandwidth larger than n^{-1/3}, in the case that the density is known to have more than one derivative.
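The isotonization step, projecting a kernel estimate evaluated on a grid onto non-increasing sequences, is the pool-adjacent-violators algorithm; a minimal sketch in which the values y stand in for kernel-estimate evaluations on an increasing grid:

```python
def pava_decreasing(y):
    """Least-squares projection of y onto non-increasing sequences (PAVA)."""
    blocks = []  # list of [block mean, block size]
    for v in y:
        blocks.append([float(v), 1])
        # Pool while the monotonicity constraint is violated.
        while len(blocks) > 1 and blocks[-2][0] < blocks[-1][0]:
            m2, n2 = blocks.pop()
            m1, n1 = blocks.pop()
            blocks.append([(m1 * n1 + m2 * n2) / (n1 + n2), n1 + n2])
    return [m for m, n in blocks for _ in range(n)]

print(pava_decreasing([3, 1, 2]))  # [3.0, 1.5, 1.5]
```

The projection pools violating neighbours into blocks carrying their weighted mean, so it preserves the overall average of the input.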

19.
ABSTRACT

A general theory for the case where a factor has both fixed and random effect levels is developed under a one-way treatment structure model. Estimation procedures for the fixed effects and the variance components are considered for the model. The testing of fixed effects is considered both when the variance-covariance matrix is known and when it is unknown. Confidence intervals for estimable functions and prediction intervals for predictable functions are constructed. The computational procedures are illustrated using data from an on-farm trial.

20.
Given k (≥ 3) independent normal populations with unknown means and unknown and unequal variances, a single-stage sampling procedure for selecting the best t out of k populations is proposed; the procedure is completely independent of the unknown means and the unknown variances. For various combinations of k and the probability requirement, tables of procedure parameters are provided for practitioners.
