Similar Literature
20 similar records found (search time: 734 ms)
1.
It is often desirable to test non-nested hypotheses. Cox (1961, 1962) proposed forming a log-likelihood ratio from their maxima and then comparing this value to its expected value under the null hypothesis. Pitfalls exist when Cox's test is applied to the special case of testing normality versus lognormality. Pesaran (1981) and Kotz (1973) pointed out the slow convergence rate of Cox's test. In this paper this fact is reemphasized; moreover, we propose an alternative likelihood ratio test which remedies the problems arising from negative estimates of the asymptotic variance of Cox's test statistic and is uniformly more powerful than most commonly used tests.
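As a rough illustration of the quantity at issue, the sketch below computes the basic Cox-type comparison for positive data: the maximized normal log-likelihood minus the maximized lognormal one. It is not the authors' remedied statistic, and all names are illustrative.

```python
# A minimal sketch, assuming positive data: the raw log-likelihood
# comparison whose slow convergence is discussed above.
import numpy as np
from scipy import stats

def normal_vs_lognormal_llr(x):
    """Maximized normal log-likelihood minus maximized lognormal one."""
    ll_norm = stats.norm.logpdf(x, x.mean(), x.std()).sum()   # normal MLEs
    lmu, lsig = np.log(x).mean(), np.log(x).std()             # lognormal MLEs
    ll_lnorm = stats.lognorm.logpdf(x, s=lsig, scale=np.exp(lmu)).sum()
    return ll_norm - ll_lnorm

rng = np.random.default_rng(0)
print(normal_vs_lognormal_llr(rng.normal(10, 1, 200)))   # > 0 favours normality
```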

2.
Conditional bias and the asymptotic mean sensitivity curve (AMSC) are useful measures for assessing the possible effect of an observation on an estimator when sampling from a parametric model. In this paper we obtain expressions for these measures in truncated distributions and study their theoretical properties. Specific results are given for the UMVUE of a parametric function. We note that the AMSC for the UMVUE in truncated distributions satisfies some of the most relevant properties obtained in a previous paper for the AMSC of the UMVUE in the NEF-QVF case; the main differences are also established. As for the conditional bias, since it is a finite-sample measure, we include some practical examples to illustrate its behaviour as the sample size increases.

3.
Approximate confidence intervals are given for the lognormal regression problem. The error in the nominal level can be reduced to O(n^{-2}), where n is the sample size. An alternative procedure is given which avoids the non-robust assumption of lognormality. This amounts to finding a confidence interval based on M-estimates for a general smooth function of both θ and F, where θ denotes the parameters of the general (possibly nonlinear) regression problem and F is the unknown distribution function of the residuals. The derived intervals are compared using theory, simulation and real data sets.

4.
魏学辉, 白仲林. 《统计研究》(Statistical Research), 2010, 27(8): 99-104
Common unit root tests all place restrictions on the initial value of the series, yet the data encountered in empirical work, owing to various shocks, often fail to satisfy these assumptions. It is therefore necessary to consider unit root tests whose power is robust to the initial value. Building on an analysis of how the initial value affects the power of unit root tests, this paper proposes a combined p-value unit root test based on Fisher's statistic whose power is relatively robust to the initial value, and studies its small-sample properties. An application to China's month-on-month CPI series shows that, as China's macroeconomic control policies have improved, the CPI has gradually become stationary.
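A minimal sketch of the Fisher combination step the abstract builds on, taking the individual unit-root-test p-values as given:

```python
# Fisher's combined statistic: for independent null p-values p_1..p_k,
# -2 * sum(log p_i) is chi-squared with 2k degrees of freedom.
import numpy as np
from scipy import stats

def fisher_combined_test(pvalues):
    p = np.asarray(pvalues, dtype=float)
    stat = -2.0 * np.log(p).sum()
    return stat, stats.chi2.sf(stat, df=2 * len(p))

# e.g. p-values from unit root tests on subsamples or transformed series
print(fisher_combined_test([0.04, 0.20, 0.11]))
```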

5.
Random samples are assumed for the univariate two-sample problem. Sometimes this assumption is violated in that one observation in a "sample" of size m comes from a population different from that yielding the remaining m - 1 observations (which are a random sample). The interest is then in whether this random sample of size m - 1 comes from the same population as the other random sample. If such a violation occurs and can be recognized, and the non-conforming observation can also be identified (without imposing conditional effects), that observation could be removed and a two-sample test applied to the remaining samples. Unfortunately, satisfactory procedures for such a removal do not seem to exist. An alternative approach is to use two-sample tests whose significance levels remain the same when a non-conforming observation occurs and is removed as in the case where both samples are truly random. The equal-tail median test is shown to have this property when the two "samples" are of the same size (and ties do not occur).
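A brief sketch of the median test in question, under the assumption of no ties; the hypergeometric reference distribution is standard for median tests, and the equal-tail version doubles the smaller tail probability:

```python
# Count sample-1 observations above the pooled median and refer the
# count to a hypergeometric distribution.
import numpy as np
from scipy import stats

def equal_tail_median_test(x, y):
    pooled = np.concatenate([x, y])
    med = np.median(pooled)
    a = int((x > med).sum())              # sample-1 values above the median
    above = int((pooled > med).sum())
    hg = stats.hypergeom(len(pooled), above, len(x))
    p = 2 * min(hg.cdf(a), hg.sf(a - 1))  # sf(a-1) = P(X >= a)
    return a, min(p, 1.0)

rng = np.random.default_rng(1)
print(equal_tail_median_test(rng.normal(0, 1, 30), rng.normal(1, 1, 30)))
```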

6.
Given a random sample taken on a compact domain S ⊂ ℝ^d, the authors propose a new method for testing the hypothesis of uniformity of the underlying distribution. The test statistic is based on the distance of every observation to the boundary of S. The proposed test has a number of interesting properties. In particular, it is feasible and particularly suitable for high-dimensional data; it is distribution-free for a wide range of choices of S; it can be adapted to the case where the support S is unknown; and it also allows for one-sided versions. Moreover, the results suggest that, in some cases, this procedure does not suffer from the well-known curse of dimensionality. The authors study the properties of this test from both a theoretical and a practical point of view. In particular, an extensive Monte Carlo simulation study allows them to compare their method with some alternative procedures. They conclude that the proposed test provides quite a satisfactory balance between power, computational simplicity, and adaptability to different dimensions and supports.
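As an illustration of the statistic's main ingredient, here is a sketch under the concrete assumption S = [0,1]^d, with calibration done by plain Monte Carlo rather than the authors' distribution theory:

```python
# Distance from a point to the boundary of the unit cube is
# min_i min(x_i, 1 - x_i); compare observed distances with simulated
# uniform ones via a two-sample KS test (a crude stand-in for the
# paper's calibrated statistic).
import numpy as np
from scipy import stats

def boundary_distances(X):
    return np.minimum(X, 1.0 - X).min(axis=1)

rng = np.random.default_rng(2)
X = rng.beta(2, 2, size=(300, 5))       # non-uniform data to test, d = 5
ref = rng.uniform(size=(50_000, 5))     # Monte Carlo reference under H0
print(stats.ks_2samp(boundary_distances(X), boundary_distances(ref)))
```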

7.
A truncated sequential sign test for location shift is studied when the null or target location has been estimated from a prior, fixed sample. If the randomness of the target is ignored, the test is shown to be strongly anticonservative, the degree being proportional to the ratio of the truncation point to the fixed sample size. The test is distribution-free under the hypothesis of no shift, enabling exact Type I errors and null expected sample sizes to be calculated and compared to a modified Brownian motion approximation. A Monte Carlo power study shows that the test compares favorably with the test against a known target. An abbreviated table of critical values is given.

8.
In this article, we systematically study the optimal truncated group sequential test on binomial proportions. Through analysis of the cost structure, the average test cost is introduced as a new optimality criterion. According to the new criterion, optimal tests are defined over the design parameters, including the boundaries, the success discriminant value, the stage sample vector, the stage size, and the maximum sample size. Since the computation time needed to find optimal designs by exhaustive search is intolerably long, a group sequential sample space sorting method and accompanying procedures are developed to find near-optimal ones. In comparison with the international standard ISO 2859-1, the truncated group sequential designs proposed in this article can reduce the average test cost by around 20%.
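The average-cost idea can be illustrated on the simplest truncated plan, a two-stage (double) binomial sampling scheme; the plan constants below are illustrative, not taken from the paper or from ISO 2859-1:

```python
# Expected sample size (the main driver of average cost) of a two-stage
# plan: stop after stage 1 if the defect count is <= a1 (accept) or
# >= r1 (reject); otherwise draw the second stage.
import numpy as np
from scipy import stats

def expected_sample_size(p, n1=50, n2=50, a1=1, r1=4):
    pmf = stats.binom.pmf(np.arange(n1 + 1), n1, p)
    p_stage2 = pmf[a1 + 1:r1].sum()      # counts strictly between a1 and r1
    return n1 + p_stage2 * n2

for p in (0.01, 0.05, 0.10):
    print(f"p = {p:.2f}: ASN = {expected_sample_size(p):.1f}")
```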

9.
Row × column interaction is frequently assumed to be negligible in two-way classifications having one observation per cell. Absence of interaction allows the researcher to estimate experimental error and to proceed with making inferences about row and column effects. If additivity is suspect, it is conventional to test it against a structured alternative. If the structured alternative misspecifies the existing nonadditivity, then the power of the test is low, even if the magnitude of the existing nonadditivity is large. The locally best invariant (LBI) test of additivity is less subject to model misspecification because a particular structured alternative need not be hypothesized. This paper illustrates the LBI test of additivity and compares its power to that of the Johnson-Graybill likelihood ratio (LR) test. The LBI test performs as well as the LR test under a Johnson-Graybill alternative and performs better than the LR test under more general alternatives.
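For contrast with the LBI approach, here is a sketch of the best-known structured alternative, Tukey's one-degree-of-freedom test for nonadditivity (the LBI statistic itself is not implemented here):

```python
# Tukey's 1-df test: project the residuals onto the product of row and
# column effects and compare that single sum of squares to the remainder.
import numpy as np
from scipy import stats

def tukey_one_df(Y):
    r, c = Y.shape
    a = Y.mean(axis=1) - Y.mean()                    # row effects
    b = Y.mean(axis=0) - Y.mean()                    # column effects
    resid = Y - (Y.mean() + a[:, None] + b[None, :])
    ss_na = (resid * np.outer(a, b)).sum() ** 2 / ((a**2).sum() * (b**2).sum())
    df_err = (r - 1) * (c - 1) - 1
    F = ss_na / (((resid**2).sum() - ss_na) / df_err)
    return F, stats.f.sf(F, 1, df_err)

rng = np.random.default_rng(3)
Y = rng.normal(size=(6, 5)) + 0.3 * np.outer(np.arange(6), np.arange(5))
print(tukey_one_df(Y))
```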

10.
In this article a general result is derived that, along with a functional central limit theorem for a sequence of statistics, can be employed to develop a nonparametric repeated significance test with adaptive target sample size. This method is used to derive a repeated significance test with adaptive target sample size for the shift model. The repeated significance test is based on a functional central limit theorem for a sequence of partial sums of truncated observations. Based on the numerical results presented in this article, one can conclude that this nonparametric sequential test performs quite well.

11.
The non-central chi-squared distribution plays a vital role in statistical testing procedures. Estimation of the non-centrality parameter provides valuable information for the power calculation of the associated test. We are interested in the statistical inference properties of the non-centrality parameter estimate based on one observation (usually a summary statistic) from a truncated chi-squared distribution. This work is motivated by the application of the flexible two-stage design in case–control studies, where the sample size needed for the second stage of a two-stage study can be determined adaptively from the results of the first stage. We first study the moment estimate for the truncated distribution and prove its existence, uniqueness, inadmissibility, and convergence properties. We then define a new class of estimates that includes the moment estimate as a special case. Among this class of estimates, we recommend one member that outperforms the moment estimate in a wide range of scenarios. We also present two methods for constructing confidence intervals. Simulation studies are conducted to evaluate the performance of the proposed point and interval estimates.
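A sketch of the moment-equation idea in this setting: choose the noncentrality at which the truncated mean matches the single observation. A crude grid integration stands in for exact formulas, and it illustrates only the baseline moment estimate, not the recommended class:

```python
# One observation x from chi2(df, nc) truncated below at c: solve for
# the nc making E[X | X > c] equal to x.  Illustrative only.
import numpy as np
from scipy import stats, optimize

GRID = np.linspace(0.0, 300.0, 30001)

def truncated_mean(nc, df, c):
    pdf = stats.ncx2.pdf(GRID, df, nc)
    w = np.where(GRID > c, pdf, 0.0)      # keep mass above the cut-off c
    return (GRID * w).sum() / w.sum()

def moment_estimate(x, df, c):
    return optimize.brentq(lambda nc: truncated_mean(nc, df, c) - x,
                           1e-8, 200.0)

print(moment_estimate(x=25.0, df=1, c=10.0))   # c: first-stage threshold
```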

12.
Our main interest is parameter estimation using maximum entropy methods in the prediction of future events for homogeneous Poisson processes when the distribution governing the parameters is unknown. We intend to use empirical Bayes techniques and the maximum entropy principle to model the prior information. This approach is also motivated by the success of the gamma prior for this problem, since it is well known that the gamma maximizes Shannon entropy under appropriately chosen constraints. As an alternative, however, we propose to apply one of the commonly used methods to estimate the parameters of the maximum entropy prior. It consists of moment matching, that is, maximizing the entropy subject to the constraint that the first two moments equal the empirical ones, and we obtain the truncated normal distribution (truncated below at the origin) as a solution. We also use maximum likelihood estimation (MLE) to estimate the parameters of the truncated normal distribution in this case. These two solutions, the gamma and the truncated normal, which maximize the entropy under different constraints, are tested for their effectiveness in predicting future events for homogeneous Poisson processes by measuring their coverage probabilities, the suitably normalized lengths of their prediction intervals, and their goodness-of-fit as measured by the Kullback–Leibler criterion and a discrepancy measure. The estimators obtained by these methods are compared in an extensive simulation study to each other as well as to the estimators obtained using the completely noninformative Jeffreys' prior and the usual frequency methods. We also consider the problem of choosing between the two maximum entropy priors proposed here, the gamma and the truncated normal, estimated both by matching of the first two moments and by maximum likelihood, when faced with data, and we advocate the use of the sample skewness and kurtosis. The methods are also illustrated on two examples: one concerning the occurrence of mammary tumors in laboratory animals taking part in a carcinogenicity experiment, and the other a warranty dataset from the automobile industry.
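A sketch of the moment-matching step for the truncated normal prior (truncated below at the origin); the numerical solver and the data are illustrative, and the MLE variant is omitted:

```python
# Find (mu, sigma) such that a normal truncated below at 0 has mean and
# variance equal to the empirical moments of the estimated Poisson rates.
import numpy as np
from scipy import stats, optimize

def match_truncated_normal(m1, v1):
    def eqs(theta):
        mu, log_s = theta
        s = np.exp(log_s)                     # keep the scale positive
        tn = stats.truncnorm(-mu / s, np.inf, loc=mu, scale=s)
        return [tn.mean() - m1, tn.var() - v1]
    mu, log_s = optimize.fsolve(eqs, x0=[m1, 0.5 * np.log(v1)])
    return mu, np.exp(log_s)

rates = np.array([2.1, 3.4, 1.8, 2.9, 4.0, 2.5])   # illustrative rate MLEs
print(match_truncated_normal(rates.mean(), rates.var()))
```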

13.
The risk of an individual woman having a pregnancy associated with Down's syndrome is estimated given her age and her α-fetoprotein, human chorionic gonadotropin, and pregnancy-specific β1-glycoprotein levels. The classical estimation method is based on discriminant analysis under the assumption of lognormality of the marker values, but logistic regression is also applied for data classification. In the present work, we compare the performance of the two methods using a dataset containing almost 89,000 unaffected and 333 affected pregnancies. Assuming lognormality of the marker values, we also calculate the theoretical detection and false positive rates for both methods.
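A schematic of the comparison described above on synthetic log-marker data; the real marker distributions, prevalences, and screening cut-off are not reproduced here:

```python
# Fit both classifiers on log-transformed marker values and compare
# detection and false positive rates at a common illustrative cut-off.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n0, n1 = 5000, 50                                   # unaffected, affected
X = np.vstack([rng.normal(0.0, 1.0, (n0, 4)),       # 4 log-marker values
               rng.normal(0.7, 1.0, (n1, 4))])      # shifted in affected
y = np.r_[np.zeros(n0), np.ones(n1)]

for clf in (LinearDiscriminantAnalysis(), LogisticRegression(max_iter=2000)):
    risk = clf.fit(X, y).predict_proba(X)[:, 1]
    flag = risk > 0.02                              # illustrative threshold
    print(f"{type(clf).__name__}: detection {flag[y == 1].mean():.2f}, "
          f"false positive {flag[y == 0].mean():.3f}")
```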

14.
In this note we obtain, based on the sample sum, a statistic for testing the homogeneity of a random sample from the positive (zero-truncated) Lagrangian Poisson distribution given in Consul and Jain (1973). This test statistic reduces, in a special case, to that of Singh (1978). A goodness-of-fit test statistic for the Borel-Tanner distribution is obtained as a particular case of our results.

15.
Assuming that there is a linear relationship between the parameters of a two-parameter exponential distribution, the distribution reduces to one with known coefficient of variation. The problem of testing the scale parameter is considered using fixed-sample and sequential testing procedures. A comparison of the two procedures shows that the difference between the fixed sample sizes and the expected sample sizes in the null case is remarkable. Therefore, a truncated test is proposed and its expected sample sizes in the null case are compared with those of the sequential test.

16.
This paper describes a nonparametric approach to making inferences for aggregate loss models in the insurance framework. We assume that an insurance company provides a historical sample of claims given by claim occurrence times and claim sizes. Furthermore, information may be incomplete, as claims may be censored and/or truncated. In this context, the main goal of this work is to fit a probability model for the total amount that will be paid on all claims during a fixed future time period. To solve this prediction problem, we propose a new methodology based on nonparametric estimators of the density functions with censored and truncated data, the use of Monte Carlo simulation methods, and bootstrap resampling. The methodology is useful for comparing alternative pricing strategies in different insurance decision problems. The proposed procedure is illustrated with a real dataset provided by the insurance department of an international commercial company.
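A stripped-down sketch of the prediction step, using plain bootstrap resampling of complete claims (the paper additionally handles censoring and truncation through nonparametric density estimates):

```python
# Approximate the distribution of next period's total claim amount by
# resampling historical claim sizes under a Poisson claim count.
import numpy as np

rng = np.random.default_rng(5)
claim_sizes = rng.lognormal(7.0, 1.0, size=400)   # illustrative history
claims_per_period = 52.0                          # estimated claim frequency

totals = np.array([
    rng.choice(claim_sizes, rng.poisson(claims_per_period)).sum()
    for _ in range(10_000)
])
print(np.quantile(totals, [0.50, 0.95, 0.995]))   # pricing/reserve quantiles
```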

17.
张华节, 黎实. 《统计研究》(Statistical Research), 2013, 30(2): 95-101
This paper studies how the initial value of the time series affects the power of the DF-type IPS panel unit root test. We derive the limiting distribution and the local asymptotic power function of the DF-type IPS statistic under local alternatives, and find that under heterogeneous local alternatives the local asymptotic power of the DF-type IPS statistic is a monotonically increasing function of the initial condition. Small-sample Monte Carlo simulations show that if the initial condition is assumed to be zero, the power of the DF-type IPS statistic is underestimated.
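A sketch of the t-bar construction underlying the IPS test referred to above; the standardization of t-bar by the tabulated moments of the individual t-statistics is omitted:

```python
# Average the individual DF t-statistics across the N panel units.
import numpy as np
from statsmodels.tsa.stattools import adfuller

def ips_t_bar(panel):
    """panel: array of shape (N_units, T_periods)."""
    return np.mean([adfuller(y, maxlag=0, autolag=None)[0] for y in panel])

rng = np.random.default_rng(6)
panel = rng.normal(size=(10, 120)).cumsum(axis=1)   # 10 unit-root series
print(ips_t_bar(panel))
```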

18.
A generalization of Anderson's sequential probability ratio test procedure is proposed in which the continuation region is bounded by one pair of converging lines up to a certain stage of the experiment and by another pair of converging lines thereafter, until the procedure is truncated at a predetermined stage. The OC and ASN functions are derived. For certain parameter values the proposed procedure attains lower average sample numbers than those attainable by any other known procedure.
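A schematic of the continuation region described above: the cumulative log-likelihood-ratio walk is checked against a first pair of converging lines up to stage m and a second pair thereafter, with truncation at stage T. All slopes and intercepts are illustrative, not the paper's optimized constants:

```python
import numpy as np

def two_phase_sprt(z, m=30, T=60):
    """z: at least T log-likelihood-ratio increments."""
    s = 0.0
    for n, step in enumerate(z, start=1):
        s += step
        if n <= m:
            lo, hi = -4.0 + 0.05 * n, 4.0 - 0.05 * n   # first pair of lines
        else:
            lo, hi = -3.0 + 0.03 * n, 3.0 - 0.03 * n   # second pair
        if s <= lo:
            return "accept H0", n
        if s >= hi:
            return "reject H0", n
        if n == T:                                     # truncation stage
            return ("reject H0" if s > 0 else "accept H0"), n

rng = np.random.default_rng(7)
print(two_phase_sprt(rng.normal(0.15, 1.0, size=60)))
```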

19.
The problem of missing observations in regression models is often solved by using imputed values to complete the sample. As an alternative for static models, it has been suggested to limit the analysis to the periods or units for which all relevant variables are observed. The choice of an imputation procedure affects the asymptotic efficiency of the method used to subsequently estimate the parameters of the model. In this note, we show that the relative asymptotic efficiency of three estimators designed to handle incomplete samples depends on parameters that have a straightforward statistical interpretation. In terms of the gain in asymptotic efficiency, the use of these estimators is equivalent to observing a percentage of the values which are actually missing. This percentage depends on three R²-measures only, which can be straightforwardly computed in applied work. It should therefore be easy in practice to check whether it is worthwhile to use a more elaborate estimator.

20.
A challenge arising in cancer immunotherapy trial design is the presence of a delayed treatment effect, under which the proportional hazards assumption no longer holds. As a result, a traditional survival trial design based on the standard log-rank test, which ignores the delayed treatment effect, leads to a substantial loss of statistical power. Recently, a piecewise weighted log-rank test was proposed to take the delayed treatment effect into account in the trial design. However, because its sample size formula was derived under a sequence of local alternative hypotheses, it underestimates the sample size when the hazard ratio is relatively small for a balanced trial design and gives inaccurate sample size estimates for unbalanced designs. In this article, we derive a new sample size formula under a fixed alternative hypothesis for the delayed treatment effect model. Simulation results show that the new formula provides accurate sample size estimation for both balanced and unbalanced designs.
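A sketch of the piecewise weighting idea at the heart of the design (weight 0 before the delay time t0, weight 1 after it), built from the standard log-rank ingredients; ties are handled only through the usual hypergeometric variance, and the sample size formula itself is not reproduced:

```python
import numpy as np

def piecewise_weighted_logrank(time, event, group, t0):
    """Log-rank statistic that ignores events before the delay t0."""
    num, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        if t < t0:
            continue                         # weight 0 before the delay
        at_risk = time >= t
        n = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        num += d1 - d * n1 / n               # observed minus expected
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return num / np.sqrt(var)                # approx. N(0, 1) under H0

rng = np.random.default_rng(8)
t = np.r_[rng.exponential(1.0, 150), rng.exponential(1.4, 150)]
grp = np.r_[np.zeros(150), np.ones(150)].astype(int)
ev = np.ones(300, dtype=int)                 # toy example without censoring
print(piecewise_weighted_logrank(t, ev, grp, t0=0.3))
```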
