Similar Documents
20 similar documents found (search time: 31 ms)
1.
An estimator is proposed for the parameter λ of the log-zero-Poisson distribution. While it is not a consistent estimator of λ in the usual statistical sense, it is shown to be quite close to the maximum likelihood estimates for many of the 35 sets of data on which it is tried. Since obtaining maximum likelihood estimates is extremely difficult for this and other contagious distributions, this estimate will serve at least as an initial estimate in solving the likelihood equations iteratively. A lesson learned from this experience is that in the area of contagious distributions, variability is so large that attention should be focused directly on the mean squared error and not on consistency or unbiasedness, whether for small samples or for the asymptotic case. Sample sizes for some of the data considered in the paper are in the hundreds. The fact that this inconsistent estimator of λ is closer to the maximum likelihood estimator than the consistent moment estimator shows that the variability is large enough that consistency fails to materialize even for the large sample sizes usually available in practice.

2.
When testing sequentially with grouped data, we may or may not adjust the test statistic to allow for the grouping. If the test statistic is not adjusted, the whole power curve will be altered by the grouping, while if the statistic is adjusted, the power curve is altered, but the probability levels or errors of the first and second kind, being distribution-free, are unchanged. It is shown, even for very coarse grouping of the data, that the power curve is only slightly affected by using the unadjusted statistic for tests on the mean or the variance of a normal distribution. For certain other distributions, however, in particular the negative exponential distribution, this procedure breaks down completely if the group width exceeds a certain value.

3.
There may be situations in which either the reliability data do not fit popular lifetime models or the estimation of the parameters is not easy, while there may be other distributions which are not popular but either provide better goodness-of-fit or have a smaller number of parameters to be estimated, or have both advantages. This paper proposes the Maxwell distribution as a lifetime model and supports its usefulness in reliability theory through real data examples. Important distributional properties and reliability characteristics of this model are elucidated. Estimation procedures for the parameter, mean life, reliability and failure-rate functions are developed. In view of cost constraints and the convenience of intermediate removals, the progressively Type-II censored sample information is used in the estimation. The efficiencies of the estimates are studied through simulation. Apart from researchers and practitioners in reliability theory, the study is also useful for scientists in physics and chemistry, where the Maxwell distribution is widely used.
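The reliability characteristics the abstract mentions (mean life, reliability function, failure rate) can be sketched directly from the Maxwell distribution in `scipy.stats`; the scale parameter and mission time below are illustrative assumptions, not values from the paper.

```python
from scipy.stats import maxwell

a = 2.0   # assumed scale parameter (illustrative, not from the paper)
t = 1.5   # assumed mission time

mean_life = maxwell.mean(scale=a)                      # expected lifetime
reliability = maxwell.sf(t, scale=a)                   # R(t) = P(T > t)
failure_rate = maxwell.pdf(t, scale=a) / reliability   # hazard h(t) = f(t) / R(t)
```

The same three quantities are what the paper estimates from progressively Type-II censored samples; here they are simply evaluated at known parameter values.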

4.
For the analysis of a time-to-event endpoint in a single-arm or randomized clinical trial it is generally perceived that interpretation of a given estimate of the survival function, or the comparison between two groups, hinges on some quantification of the amount of follow-up. Typically, a median of some loosely defined quantity is reported. However, whatever median is reported typically does not answer the question(s) trialists actually have in terms of follow-up quantification. In this paper, inspired by the estimand framework, we formulate a comprehensive list of relevant scientific questions that trialists have when reporting time-to-event data. We illustrate how these questions should be answered, and that reference to an unclearly defined follow-up quantity is not needed at all. In drug development, key decisions are made based on randomized controlled trials, and we therefore also discuss relevant scientific questions not only when looking at a time-to-event endpoint in one group, but also for comparisons. We find that different thinking about some of the relevant scientific questions around follow-up is required depending on whether a proportional hazards assumption can be made or other patterns of survival functions are anticipated, for example, delayed separation, crossing survival functions, or the potential for cure. We conclude the paper with practical recommendations.

5.

This paper presents a method of customizing goodness-of-fit tests that transforms the empirical distribution function in such a way as to create tests for certain alternatives. Using the (f, g) transform described in Blom (1958), one can create non-parametric tests for an assortment of alternative distributions. As examples, three new (f, g)-corrected Kolmogorov-Smirnov tests for goodness-of-fit are discussed. One of these tests is powerful for testing whether or not the data come from an alternative that is heavier in the tails. Another test identifies whether or not the data come from an alternative which is heavier in the middle of the distribution. The last test identifies if the data come from an alternative in which the first or third quartile is far from the corresponding quartile of the hypothesized distribution. The behavior of the three new tests is investigated through a power study.
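The (f, g)-corrected variants described in the abstract are not available in standard libraries; as a baseline, the uncorrected Kolmogorov-Smirnov statistic they modify can be computed as follows (simulated data, arbitrary seed):

```python
# Baseline one-sample Kolmogorov-Smirnov test; the paper's (f, g)-corrected
# variants would transform the empirical distribution function before
# computing the supremum distance shown here.
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(0)
sample = rng.normal(size=200)           # data genuinely drawn from N(0, 1)
stat, pvalue = kstest(sample, "norm")   # H0: sample ~ N(0, 1)
```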

6.
In many randomized clinical trials, the primary response variable, for example, the survival time, is not observed directly after the patients enroll in the study but rather observed after some period of time (lag time). It is often the case that such a response variable is missing for some patients due to censoring that occurs when the study ends before the patient's response is observed or when the patients drop out of the study. It is often assumed that censoring occurs at random, which is referred to as noninformative censoring; however, in many cases such an assumption may not be reasonable. If the missing data are not analyzed properly, the estimator or test for the treatment effect may be biased. In this paper, we use semiparametric theory to derive a class of consistent and asymptotically normal estimators for the treatment effect parameter which are applicable when the response variable is right censored. The baseline auxiliary covariates and post-treatment auxiliary covariates, which may be time-dependent, are also considered in our semiparametric model. These auxiliary covariates are used to derive estimators that both account for informative censoring and are more efficient than the estimators which do not consider the auxiliary covariates.

7.
Bayesian sample size estimation for equivalence and non-inferiority tests for diagnostic methods is considered. The goal of the study is to test whether a new screening test of interest is equivalent to, or not inferior to, the reference test, which may or may not be a gold standard. Sample sizes are chosen by the model performance criteria of average posterior variance, length and coverage probability. In the absence of a gold standard, sample sizes are evaluated by the ratio of marginal probabilities of the two screening tests; whereas in the presence of a gold standard, sample sizes are evaluated by the measures of sensitivity and specificity.

8.
Although the importance of statistics in many phases of university activities is well known, many universities have not given serious thought to the optimum organizational structure for statistics. This article addresses this problem in two parts: (1) principles and/or goals of optimum administrative structures for statistics, and (2) discussion of the most popular organizational structures now in existence, together with comments on the advantages and disadvantages of each. The article concludes that a central statistics unit (department) appears to be the best structure, although it is not without certain disadvantages and/or dangers.

9.
The emphasis in the literature is on normalizing transformations, despite the greater importance of the homogeneity of variance in analysis. A strategy for a choice of variance-stabilizing transformation is suggested. The relevant component of variation must be identified and, when this is not within-subject variation, a major explanatory variable must also be selected to subdivide the data. A plot of group standard deviation against group mean, or log standard deviation against log mean, may identify a simple power transformation or shifted log transformation. In other cases, within the shifted Box-Cox family of transformations, a contour plot to show the region of minimum heterogeneity defined by an appropriate index is proposed to enable an informed choice of transformation. If used in conjunction with the maximum-likelihood contour plot for the normalizing transformation, then it is possible to assess whether or not there exists a transformation that satisfies both criteria.
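The log-SD-versus-log-mean plot the abstract describes rests on a simple rule: if the group SD is proportional to the mean raised to the power b, the transform y → y^(1−b) stabilizes the variance (log when b is near 1). A minimal sketch, with simulated groups whose SD is proportional to the mean (so b = 1 by construction):

```python
import numpy as np

rng = np.random.default_rng(1)
group_means = np.array([5.0, 10.0, 20.0, 40.0])
# simulate groups with SD = 0.2 * mean, i.e. SD proportional to mean (b = 1)
groups = [m + m * 0.2 * rng.standard_normal(50) for m in group_means]

log_mean = np.log([g.mean() for g in groups])
log_sd = np.log([g.std(ddof=1) for g in groups])

b = np.polyfit(log_mean, log_sd, 1)[0]   # slope of log SD against log mean
power = 1.0 - b                          # suggested variance-stabilizing power
```

Here the fitted slope should be close to 1, pointing at the log transformation, which is the conclusion the plot-based strategy would reach by eye.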

10.
This study applies extreme-value theory to daily international stock-market returns to determine (1) whether or not returns follow a heavy-tailed stable distribution, (2) the likelihood of an extreme return, such as a 20% drop in a single day, and (3) whether or not the likelihood of an extreme event has changed since October 1987. Empirical results reject a heavy-tailed stable distribution for returns. Instead, a Student-t distribution or an autoregressive conditional heteroscedastic process is better able to capture the salient features of returns. We find that the likelihood of a large single-day return differs widely across markets and, for the G-7 countries, the 1987 stock-market drop appears to be largely an isolated event. A drop of this magnitude, however, is not rare in the case of Hong Kong. Finally, there is only limited evidence that the chance of a large single-day decline is more likely since the October 1987 market drop; however, exceptions include stock markets in Germany, The Netherlands and the UK.
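The contrast the abstract draws between a Student-t model and a thin-tailed one can be sketched on simulated "returns" (the degrees of freedom, scale, and 20% threshold below are illustrative assumptions, not the paper's market data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# simulated heavy-tailed daily returns (t with 4 df, 1% scale)
returns = stats.t.rvs(df=4, scale=0.01, size=2000, random_state=rng)

df_t, loc_t, scale_t = stats.t.fit(returns)   # heavy-tailed fit
mu, sigma = stats.norm.fit(returns)           # thin-tailed fit

# probability of a single-day move of 20% or more under each fitted model
p_tail_t = stats.t.sf(0.20, df_t, loc_t, scale_t) + stats.t.cdf(-0.20, df_t, loc_t, scale_t)
p_tail_norm = stats.norm.sf(0.20, mu, sigma) + stats.norm.cdf(-0.20, mu, sigma)
```

The Student-t fit assigns a far larger probability to a 20% single-day move than the normal fit does, which is the practical point of preferring it for extreme-event likelihoods.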

11.
This paper gives, at an elementary level, a unified presentation of concepts related to sufficiency and minimal sufficiency. Extensively discussed are techniques for showing, in a particular statistical model, that a given statistic is not sufficient or that a given sufficient statistic is not minimal. The applicability of these techniques is illustrated in three examples.

12.
MODELS AND DESIGNS FOR EXPERIMENTS WITH MIXTURES
Properties such as the tensile strength of an alloy of different metals and the freezing point of a mixture of liquid chemicals depend on the proportions (by weight or volume) of the components present and not on the total amount of the mixture. In choosing a model to relate such a property to the proportions of the various components of the mixture, there arise intriguing difficulties due to the fact that proportions sum to unity. It is demonstrated how to construct models which allow for the possibility of inactive components (components that do not affect the property at all) or components with additive effects. The design of experiments to fit such models to data is then discussed with a view to determining whether a given component is inactive or has an additive effect. The optimal allocation of observations to simplex-lattice designs is considered for one of these models. The construction of D-optimal designs for these models is an open problem.
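A {q, m} simplex-lattice design of the kind mentioned above takes every mixture whose component proportions are multiples of 1/m and sum to one. A minimal sketch enumerating the {3, 2} lattice (an arbitrary small example):

```python
from itertools import product

q, m = 3, 2   # 3 components, proportions in multiples of 1/2 (illustrative)
design = [tuple(k / m for k in ks)
          for ks in product(range(m + 1), repeat=q)
          if sum(ks) == m]   # keep only points whose proportions sum to 1
```

For q = 3, m = 2 this yields the 6 classic lattice points: the 3 pure blends and the 3 binary 50/50 blends.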

13.
A fully nonparametric model may not perform well, while a researcher who wants to use a parametric model may not know the functional form with respect to a subset of the regressors or the density of the errors. This becomes even more challenging when the data contain gross outliers or unusual observations. Moreover, in practice neither the true covariates nor the smoothness of the functional form is known in advance. A robust model selection approach through which we can choose the relevant covariate components and estimate the smoothing function is therefore an appealing tool. This paper considers weighted signed-rank estimation and variable selection under the adaptive lasso for semiparametric partial additive models. B-splines are used to estimate the unknown additive nonparametric function, and it is shown that, despite this, the proposed estimator has an oracle property. The robustness of the weighted signed-rank approach for data with heavy tails, contaminated errors, and high-leverage points is validated via finite-sample simulations. A practical application to an economic study is provided using updated Canadian household gasoline consumption data.

14.
Bartholomew's statistics for testing homogeneity of normal means with ordered alternatives have null distributions which are mixtures of chi-squared or beta distributions according as the variances are known or not. If the sample sizes are not equal, the mixing coefficients can be difficult to compute. For a simple order and a simple tree ordering, approximations to the significance levels of these tests have been developed which are based on patterns in the weight sets. However, for a moderate or large number of means, these approximations can be tedious to implement. Employing the same approach that was used in the development of these approximations, two-moment chi-squared and beta approximations are derived for these significance levels. Approximations are also developed for the testing situation in which the order restriction is the null hypothesis. Numerical studies show that in each of the cases the two-moment approximation is quite satisfactory for most practical purposes.
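A two-moment chi-squared approximation of this general kind matches the mean and variance of the mixture by a scaled chi-squared variable c·χ²(ν), with c = var / (2·mean) and ν = 2·mean² / var. A minimal sketch with invented mixing weights (the paper's weights depend on the sample sizes and ordering, and are not reproduced here):

```python
from scipy.stats import chi2

# illustrative chi-bar-square mixture: mass on chi2 with 0, 1, 2 df
weights = [0.25, 0.5, 0.25]
dfs = [0, 1, 2]

# mixture mean and variance (chi2_k has mean k and second moment 2k + k^2)
mean = sum(w * k for w, k in zip(weights, dfs))
var = sum(w * (2 * k + k ** 2) for w, k in zip(weights, dfs)) - mean ** 2

# match both moments with c * chi2(nu)
c = var / (2 * mean)
nu = 2 * mean ** 2 / var

t_obs = 3.0                        # hypothetical observed statistic
p_approx = chi2.sf(t_obs / c, nu)  # approximate upper-tail probability
```

By construction c·ν equals the mixture mean and 2·c²·ν equals its variance, so the approximation reproduces the first two moments exactly while remaining a single chi-squared evaluation.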

15.
Confounding is fundamental to the design and analysis of studies of causal effects. A variable is not a confounder if it is not a risk factor for disease or if it has the same distribution in the exposed and unexposed populations. Whether or not to adjust for a non-confounder to improve the precision of estimation has been argued by many authors. This article shows that if C is a non-confounder, the pooled and standardized (log) relative risk estimators are asymptotically normal with mean equal to the true (log) relative risk, and that the asymptotic variance of the pooled (log) relative risk estimator is less than that of the stratified estimator.
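A toy numerical illustration of the pooled-versus-stratified contrast: when C is a non-confounder (same relative risk in each stratum, and C equally distributed across exposure groups), pooling over C and stratifying on C target the same relative risk. The counts below are invented for the arithmetic.

```python
# (exposed_cases, exposed_total, unexposed_cases, unexposed_total) per stratum of C
strata = [
    (20, 100, 10, 100),   # stratum RR = 0.20 / 0.10 = 2
    (40, 100, 20, 100),   # stratum RR = 0.40 / 0.20 = 2
]

# stratum-specific relative risks
stratum_rrs = [(a / n1) / (b / n0) for a, n1, b, n0 in strata]

# pooled (collapsed over C) relative risk
a = sum(s[0] for s in strata)
n1 = sum(s[1] for s in strata)
b = sum(s[2] for s in strata)
n0 = sum(s[3] for s in strata)
pooled_rr = (a / n1) / (b / n0)
```

Both estimators land on RR = 2 here; the article's point is the finer one that, asymptotically, the pooled estimator also has the smaller variance.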

16.
To date, no case study based on choice-based conjoint analysis has been reported in the field of environmental valuation in China. This study applies choice-based conjoint analysis to urban residents' willingness to pay (WTP) for changes in the water environment, using a field survey of 1,040 sample households among urban residents downstream of Zhelin Lake in Jiujiang, Jiangxi Province. The results show that choice-based conjoint analysis can reveal changes in marginal WTP more precisely. Results from a conditional logit (Clogit) model indicate that residents prefer improvements in water quality over improvements in water-supply reliability. For the full sample, the estimated WTP is 17.57 yuan per household per month in the model without interaction terms and 17.54 yuan in the model with income-environment interaction terms, so including the interaction terms has little effect on the WTP estimates. An analysis excluding protest responses shows that estimating WTP directly from the full sample would understate the valuation; non-local households exhibit a stronger free-riding tendency toward public goods than local households; the non-environmentalist group's WTP is higher than the environmentalist group's; and the low-income group's marginal WTP exceeds that of the whole sample, contrary to expectation, a finding that warrants further study.

17.
Sunset Salvo     
The Wilcoxon-Mann-Whitney test enjoys great popularity among scientists comparing two groups of observations, especially when measurements made on a continuous scale are non-normally distributed. Triggered by different results for the procedure from two statistics programs, we compared the outcomes from 11 PC-based statistics packages. The findings were that the delivered p values ranged from significant to nonsignificant at the 5% level, depending on whether a large-sample approximation or an exact permutation form of the test was used and, in the former case, whether or not a correction for continuity was used and whether or not a correction for ties was made. Some packages also produced pseudo-exact p values, based on the null distribution under the assumption of no ties. A further crucial point is that the variant of the algorithm used for computation by the packages is rarely indicated in the output or documented in the Help facility and the manuals. We conclude that the only accurate form of the Wilcoxon-Mann-Whitney procedure is one in which the exact permutation null distribution is compiled for the actual data.
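The variant problem the abstract describes is easy to reproduce: the same two samples, tested with the exact permutation null distribution versus the large-sample approximation, give noticeably different p values. A minimal sketch with invented, tie-free data:

```python
from scipy.stats import mannwhitneyu

x = [1.1, 2.3, 3.0, 3.7, 4.0]
y = [4.5, 4.8, 5.1, 5.6, 6.2, 7.0]

# exact permutation null distribution vs normal approximation
res_exact = mannwhitneyu(x, y, method="exact", alternative="two-sided")
res_asymp = mannwhitneyu(x, y, method="asymptotic", alternative="two-sided")
```

Both variants reject at the 5% level on these data, but the two p values differ; with less extreme samples the discrepancy can straddle the 5% boundary, which is exactly the inconsistency found across the 11 packages.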

18.
The properties of a distribution-free rank-like test proposed by Moses (1963) for the two-sample scale problem are studied, and a modification of the test using Savage scores is proposed. It is shown that this rank-like test is superior to commonly used rank tests for scale in that it: (1) does not require the estimation of any location or centrality parameter, (2) does not require equal or known location parameters, (3) is robust for skewed data, (4) is resolving, and (5) has some significant power advantages. The test is shown to be asymptotically normal, and asymptotic relative efficiencies are calculated. Power properties, studied via simulation, indicate that the test is especially well suited for testing for equality of scale when the data are sampled from skewed populations with unequal medians. Extensions to the J-sample problem are indicated.

19.
Overdispersion or extra variation is a common phenomenon that occurs when binomial (multinomial) data exhibit larger variances than those permitted by the binomial (multinomial) model. This arises when the data are clustered or when the assumption of independence is violated. Goodness-of-fit (GOF) tests available in the overdispersion literature have focused on testing for the presence of overdispersion in the data, and hence they are not applicable for choosing between the several competing overdispersion models. In this paper, we consider a GOF test proposed by Neerchal and Morel [1998. Large cluster results for two parametric multinomial extra variation models. J. Amer. Statist. Assoc. 93(443), 1078–1087], and study its distributional properties and performance characteristics. This statistic is a direct analogue of the usual Pearson chi-squared statistic, but is also applicable when the clusters are not necessarily of the same size. As this test statistic is for testing model adequacy against the alternative that the model is not adequate, it is applicable in testing two competing overdispersion models.

20.

Each proposal for Open Access (OA) has its unique combination of features; each argument for or against OA focuses on particular features or criteria. This article is intended to discuss these criteria, both individually and also as each of them contributes to the different proposals for OA. Evaluation of the proposals themselves is not attempted. This discussion is intended to be of value to the supporters of OA, in choosing which plan to adopt, and to those opposed to OA, in showing where the weaknesses do and do not lie. In other words, this article intends to improve the level of factual understanding in the ongoing discussions.
